Deciding Asynchronous Hyperproperties for Recursive Programs

We introduce a novel logic for asynchronous hyperproperties with a new mechanism to identify relevant positions on traces. While the new logic is more expressive than a related logic recently presented by Bozzelli et al., the complexity of its model checking problem for finite state models remains the same. Beyond this, we study the model checking problem of our logic for pushdown models. We argue that the combination of asynchronicity and a non-regular model class studied in this paper constitutes the first suitable approach for hyperproperty model checking against recursive programs.


INTRODUCTION
In recent years, hyperproperties have received increased interest in verification, static analysis and other areas of computer science. While traditional trace properties provide a unifying concept for phenomena that can be captured by considering traces of a system individually, hyperproperties provide such a concept for phenomena that require us to look at multiple traces of a system simultaneously. For example, "a state annotated with the proposition p must eventually be reached" is a trace property, while "the number of occurrences of p is the same on all traces" is a hyperproperty. Many important requirements in information security like observational determinism or non-interference can be described by hyperproperties [Clarkson and Schneider 2010]. They also provide a natural framework for the analysis of concurrent systems [Bonakdarpour et al. 2018].
As traditional specification logics like LTL are suitable for trace properties only, new hyperlogics were developed to specify hyperproperties. A prominent example is HyperLTL [Clarkson et al. 2014], which adds quantification over named traces to LTL and thus enables the simultaneous analysis of multiple traces. These hyperlogics at first only followed traces synchronously, but software is inherently asynchronous [Baumeister et al. 2021], especially concurrent software [Finkbeiner 2017], and therefore new hyperlogics that can relate traces at different time points are required. For example, when checking information-flow policies on concurrent programs, traces might only be required to be equivalent up to stuttering [Zdancewic and Myers 2003], and thus matching observation points on different traces are not perfectly aligned. Another example of an asynchronous hyperproperty is "the number of occurrences of p is the same on all traces" from above, since matching p-positions on different traces may be arbitrarily far apart. In [Gutsfeld et al. 2021], a systematic study of asynchronous hyperproperties was conducted, including the introduction of the temporal fixpoint calculus Hμ. While Hμ is able to capture the class of asynchronous hyperproperties nicely, its model checking problem is highly undecidable, and even the decidable fragments presented in [Gutsfeld et al. 2021] have a high complexity. Later, the asynchronous hyperlogic HyperLTL_S was introduced in [Bozzelli et al. 2021]; it has an interesting decidable fragment with lower model checking complexity, simple HyperLTL_S. It extends HyperLTL by modalities that jump from the current position on each trace to the next position where some formula from a set of LTL formulae defining an indistinguishability criterion takes a different value. While accounting for asynchronicity is a necessary feature of hyperlogics for software systems, current model checking procedures for logics such as HyperLTL_S are still insufficient, as they only handle finite models, which cannot capture many programs suitably due to the lack of a representation for the call stack. Moreover, the expressivity of asynchronous hyperlogics can be increased largely beyond simple HyperLTL_S without increasing the complexity of the model checking problem for finite models. For example, HyperLTL_S lacks the ability to express arbitrary ω-regular properties and cannot express properties like "the number of occurrences of p is the same on all traces".
In this paper, we address these shortcomings by introducing a new asynchronous hyperlogic based on the linear-time μ-calculus that extends HyperLTL_S in the following respects: 1) it provides a simpler jump mechanism that directly characterises positions of interest instead of an indistinguishability criterion; 2) it supports different jump criteria for different traces; 3) for the specification of the jump criterion and basic properties of single traces, it allows linear-time μ-calculus formulae with CaRet-like modalities [Alur et al. 2004], i.e. modalities inspecting the call/return behaviour of recursive programs; and 4) it offers fixpoint operators in multitrace formulae. Moreover, we provide variants of the modalities with a well-aligned semantics to enable decidability of model checking for pushdown systems (PDS), a well-established model of recursive programs. This novel concept requires the traces under consideration to have a similar call/return behaviour.
We call the new logic mumbling Hμ, where the notion of mumbling is a counterpart to the notion of stuttering from HyperLTL_S, similar to how stuttering and mumbling are used as counterparts in a classic paper by Brookes [Brookes 1996]: stuttering describes the repetition of equal states while mumbling describes the suppression of intermediate states. It turns out that mumbling is a more powerful jump mechanism than stuttering if only LTL modalities are used in jump criteria. Surprisingly, the difference vanishes if arbitrary fixpoint operators are allowed for the definition of jump criteria.
The use of fixpoints on the trace and multitrace level gives mumbling Hμ the power to specify arbitrary ω-regular properties on both levels. Despite this and the other additions, the model checking problem against finite state models has the same complexity as for the less expressive logic simple HyperLTL_S under analogous restrictions necessary for decidability. In addition, it turns out that the model checking problem for PDS is decidable for mumbling Hμ with well-aligned modalities and the above-mentioned restriction, even though already synchronous HyperLTL model checking is undecidable for such models [Pommellet and Touili 2018]. Thus, our approach provides the first model checking algorithm for an asynchronous hyperlogic on PDS. Moreover, it is the first application of CaRet-like non-regular operators to the hyperproperty setting. In summary:
• We introduce mumbling Hμ, an asynchronous hyperlogic with several extensions compared to HyperLTL_S, and present examples highlighting the merits of the new logic (Section 3).
• We show that the finite state model checking complexity for mumbling Hμ coincides with that of simple HyperLTL_S under an analogous restriction despite the extensions (Section 4).
• We introduce well-aligned modalities and present a technique able to handle these modalities for decidable PDS model checking (Section 5). The technique is also of independent interest as it can be transferred to other hyperlogics for decidable PDS model checking.
• We compare mumbling to stuttering with respect to expressivity and show that mumbling is more expressive for criteria defined by LTL formulae and equally expressive in the presence of fixpoints (Section 6). These results require some heavy technical work due to the intricacies of the definition of stuttering and mumbling modalities.

PRELIMINARIES
Without further ado, we introduce notation, models and results used throughout the paper. This section may be skipped on a first reading and can be consulted for reference later.
Pushdown Systems and Kripke Structures. We start by introducing models for recursive systems and systems with a finite state space. For this, let AP be a finite set of atomic propositions, Θ be a finite set of stack symbols and ⊥ ∉ Θ be a special bottom-of-stack symbol. We model recursive systems by structures PD = (S, S_0, R, λ) called pushdown systems (PDS), where S is a finite set of control locations, S_0 ⊆ S is a set of initial control locations and λ : S → 2^AP is a labeling function. The transition relation R ⊆ (S × S) ∪ (S × S × Θ) ∪ (S × Θ × S) consists of three kinds of transitions: internal transitions from S × S, push transitions from S × S × Θ and return transitions from S × Θ × S. The semantics of a PDS PD = (S, S_0, R, λ) is based on configurations, i.e. pairs c = (s, γ) where s ∈ S is a control location and γ ∈ Θ*⊥ is a stack content ending in ⊥. The set of all configurations of PD is denoted by C(PD). For the definition of the semantics, let c = (s, γ) and c′ = (s′, γ′) be configurations. We call c′ an internal successor of c, denoted by c →_int c′, if there is a transition (s, s′) ∈ R and γ = γ′. We call c′ a call successor of c, denoted by c →_call c′, if there is a transition (s, s′, θ) ∈ R and γ′ = θγ. We call c′ a return successor of c, denoted by c →_ret c′, if there is a transition (s, θ, s′) ∈ R and γ = θγ′. A path of PD is an infinite alternating sequence π = c_0 t_0 c_1 t_1 ⋯ ∈ (C(PD) · {int, call, ret})^ω such that c_0 = (s_0, ⊥) for some s_0 ∈ S_0 and c_i →_{t_i} c_{i+1} holds for all i ≥ 0.
Paths of a system induce sequences of visible system behaviour called traces; an infinite trace is an infinite sequence from Traces := (2^AP · {int, call, ret})^ω and a finite trace is a finite sequence from (2^AP · {int, call, ret})* · 2^AP. The trace induced by the path π = c_0 t_0 c_1 t_1 ⋯ is λ(c_0) t_0 λ(c_1) t_1 ⋯ ∈ Traces, where λ((s, γ)) is given by λ(s). We write Paths(PD) for the set of paths of PD and Traces(PD) for the set of traces induced by paths in Paths(PD). Our model for finite state systems, Kripke structures, is defined as a special case of a PDS PD = (S, S_0, R, λ) where R ⊆ S × S, i.e. a PDS with only internal transitions. In order to highlight this case, we use K instead of PD to denote Kripke structures. As all transition labels are int in traces generated by Kripke structures, we omit these labels and write traces as sequences from (2^AP)^ω. Also, we introduce fair variants of these two system models. A fair pushdown system is a pair (PD, F) where PD = (S, S_0, R, λ) is a PDS and F ⊆ S is a set of target states. Paths(PD, F) is the set of paths of PD that visit states in F infinitely often (fair paths). Then, Traces(PD, F) is the set of traces induced by Paths(PD, F). A fair Kripke structure is defined analogously.
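To make the transition rules concrete, the following Python sketch models a PDS and enumerates the labelled successors of a configuration. The class and method names (PDS, successors) and the tuple encoding of stacks are our own illustration, not notation from the paper.

```python
# A minimal executable sketch of the PDS definition above; the class and
# method names are our own and not taken from the paper.
class PDS:
    def __init__(self, internal, push, ret, labels, initial):
        self.internal = internal  # internal transitions: pairs (s, s')
        self.push = push          # push transitions: triples (s, s', theta)
        self.ret = ret            # return transitions: triples (s, theta, s')
        self.labels = labels      # labeling function as a dict s -> set of propositions
        self.initial = initial    # initial control locations

    def successors(self, config):
        """Yield (transition label, successor configuration) pairs.
        A configuration is (s, stack) with stack a tuple ending in '⊥'."""
        s, stack = config
        for (p, q) in self.internal:
            if p == s:
                yield ("int", (q, stack))          # stack unchanged
        for (p, q, theta) in self.push:
            if p == s:
                yield ("call", (q, (theta,) + stack))  # push theta
        for (p, theta, q) in self.ret:
            if p == s and stack[0] == theta:
                yield ("ret", (q, stack[1:]))      # pop theta
```

For instance, a PDS with a single push transition (s0, s1, 'A') has exactly one successor of the initial configuration (s0, ('⊥',)), namely the call successor (s1, ('A', '⊥')).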
Words and traces. We introduce our notation for common operations on words. For infinite words w = a_0 a_1 ⋯ ∈ Σ^ω over an alphabet Σ, we use w(i) = a_i to denote the letter at position i of w and w[i] = a_i a_{i+1} … for the suffix of w starting at position i. Furthermore, for i ≤ j we write w[i, j] = a_i a_{i+1} … a_j for the subword from position i to position j of w. For traces tr = A_0 t_0 A_1 t_1 …, we slightly alter these notations in order to improve readability and write tr(i) for the symbol A_i, tr[i] for the infinite trace A_i t_i A_{i+1} t_{i+1} … and tr[i, j] for the finite trace A_i t_i … t_{j−1} A_j. The same applies to paths. For finite and infinite traces tr, we use tr|_ts to denote their restriction to their transition symbols, i.e. if tr = A_0 t_0 A_1 t_1 … then tr|_ts = t_0 t_1 …. Also, we introduce some successor and predecessor functions as in [Alur et al. 2004]. Intuitively, the global successor always moves to the next index and the backwards predecessor moves to the previous index, while the abstract successor skips over procedure calls and the caller predecessor moves back to the point where the current procedure was called. Formally, we define several functions succ : Traces × N_0 → N_0 (or partial functions succ : Traces × N_0 ⇀ N_0) that are interpreted as follows: if succ(tr, i) = j, then succ moves from tr(i) to tr(j). We define the global successor function succ_g : Traces × N_0 → N_0 by succ_g(tr, i) = i + 1. The backwards predecessor function succ_b : Traces × N_0 ⇀ N_0 is partial, where succ_b(tr, i) = i − 1 if i > 0 and is undefined otherwise. For the definition of the remaining two functions, let calls(tr, i, j) = |{l | i ≤ l < j and tr|_ts(l) = call}| be the number of calls between positions i and j on tr and rets(tr, i, j) = |{l | i ≤ l < j and tr|_ts(l) = ret}| be the number of returns between positions i and j on trace tr. Then, the abstract successor function succ_a : Traces × N_0 ⇀ N_0 is the partial function such that succ_a(tr, i) = i + 1, if tr|_ts(i) = int, and succ_a(tr, i) = min M, where M = {j | j > i, calls(tr, i, j) = rets(tr, i, j)}, if tr|_ts(i) = call and the set M is non-empty, and is undefined otherwise. Our definition of the abstract successor differs slightly from that in [Alur et al. 2004], where it is defined on words over an extended alphabet Σ × {int, call, ret} and moves from a call to the matching return. Instead, we move from the propositional position before a call to the propositional position after the matching return, which is more natural in our scenario since it ensures that both positions have the same stack level. Finally, the caller function succ_c : Traces × N_0 ⇀ N_0 is the partial function such that succ_c(tr, i) = max M, where M = {j | j < i, tr|_ts(j) = call, calls(tr, j + 1, i) = rets(tr, j + 1, i)}, if the set M is non-empty, and is undefined otherwise.
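The abstract successor and the caller can be computed with a simple depth counter over the transition symbols. The following Python sketch does this on a finite prefix of transition symbols; the function names follow the text, while the list encoding and the use of None for "undefined" are our own.

```python
def succ_a(ts, i):
    """Abstract successor on a list ts of transition symbols ('int'/'call'/'ret'):
    i+1 at an internal position; the position after the matching return when
    position i is a call; None (undefined) otherwise."""
    if ts[i] == "int":
        return i + 1
    if ts[i] != "call":
        return None
    depth = 0  # depth = calls(tr, i, j) - rets(tr, i, j) while scanning j
    for j in range(i, len(ts)):
        if ts[j] == "call":
            depth += 1
        elif ts[j] == "ret":
            depth -= 1
        if depth == 0:
            return j + 1  # first j + 1 > i with equally many calls and returns
    return None  # no matching return within this prefix

def succ_c(ts, i):
    """Caller: the most recent unmatched call position before i, if any."""
    depth = 0  # pending returns seen while scanning backwards
    for j in range(i - 1, -1, -1):
        if ts[j] == "ret":
            depth += 1
        elif ts[j] == "call":
            if depth == 0:
                return j
            depth -= 1
    return None
```

On ts = ["call", "int", "call", "ret", "int", "ret", "int"], for example, succ_a(ts, 0) is 6 (the whole call body is skipped) and succ_c(ts, 4) is 0 (position 4 sits inside the procedure called at position 0).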
Multi-Automata. In one of our constructions, we use multi-automata [Bouajjani et al. 1997] to represent certain sets of configurations of a PDS. Formally, let PD = (S, S_0, R, λ) be a PDS with S = {s_1, …, s_n} and stack alphabet Θ. A PD-multi-automaton is a tuple A = (Q, Q_0, δ, F) where Q is a finite set of states, Q_0 = {q_1, …, q_n} ⊆ Q is a set of initial states, δ : Q × Θ → 2^Q is a transition function and F is a set of final states. The transition relation → ⊆ Q × Θ* × Q is the smallest relation such that (i) q →^ε q for all q ∈ Q and (ii) q →^γ q″ and q′ ∈ δ(q″, θ) implies q →^{γθ} q′. A configuration c = (s_i, γ) is recognised by A iff q_i →^γ q for some q ∈ F. By slight abuse of notation, we sometimes identify s_i with q_i and write s = q for s ∈ S and q ∈ Q. The set of configurations recognised by A is denoted by C(A). The following result is a corollary of Proposition 3.1 in [Bouajjani et al. 1997].
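Recognition by a multi-automaton amounts to running the stack content through the transition function and checking whether a final state is reached. A small Python sketch (the dict/set encoding is ours; the bottom symbol '⊥' is treated here as an ordinary input symbol of the automaton):

```python
def recognises(delta, finals, q_init, stack):
    """Check whether a configuration with the given stack content is recognised,
    starting from the initial state q_init associated with its control location.
    delta: dict mapping (state, stack symbol) to a set of states;
    stack: the stack content read top to bottom, ending in '⊥'."""
    current = {q_init}
    for theta in stack:
        # one step of the transition relation on every state reached so far
        current = set().union(*(delta.get((q, theta), set()) for q in current))
        if not current:
            return False  # no run can continue
    return bool(current & finals)
```

For example, with delta = {("q1", "A"): {"q2"}, ("q2", "⊥"): {"qf"}} and final state qf, the configuration with stack A⊥ starting from q1 is recognised, while the one with stack ⊥ is not.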

Proposition 2.1. For any fair pushdown system (PD, F), there is a PD-multi-automaton A with size linear in |PD| such that C(A) = {c ∈ C(PD) | there is a fair path in (PD, F) starting in c}.
Visibly Pushdown Automata. Next, we use visibly pushdown automata [Alur and Madhusudan 2004] in some of our constructions. These automata are a variant of conventional pushdown automata, i.e. automata with access to a stack, but have better closure and decidability properties. Their input alphabet is a finite visibly pushdown alphabet, i.e. an alphabet Σ = Σ_i ∪ Σ_c ∪ Σ_r partitioned into alphabets Σ_i of internal symbols, Σ_c of call symbols and Σ_r of return symbols. Like PDS, they are defined over a finite stack alphabet Θ and a special bottom-of-stack symbol ⊥ ∉ Θ. Formally, a (nondeterministic) visibly pushdown automaton (VPA) over Σ and Θ is a tuple A = (Q, Q_0, δ, F) where Q is a finite set of states, Q_0 ⊆ Q is a set of initial states and F ⊆ Q is a set of final states. The transition function δ : Q × Σ → 2^Q ∪ 2^{Q × (Θ ∪ {⊥})} allows transitions of three different types: (i) if a ∈ Σ_i, then δ(q, a) ∈ 2^Q holds and δ(q, a) is a set of internal transitions, (ii) if a ∈ Σ_c, then δ(q, a) ∈ 2^{Q × Θ} holds and δ(q, a) is a set of call transitions, and (iii) if a ∈ Σ_r, then we have δ(q, a) ∈ 2^{Q × (Θ ∪ {⊥})} and δ(q, a) is a set of return transitions. Intuitively, seeing a symbol from Σ_i, Σ_c and Σ_r forces a VPA to make an internal, a call and a return transition, respectively.
Formally, a run of a VPA A over an infinite word w = a_0 a_1 ⋯ ∈ Σ^ω is an infinite sequence of configurations (q_0, σ_0)(q_1, σ_1) ⋯ with q_0 ∈ Q_0 and σ_0 = ⊥ in which every step respects δ: if a_i ∈ Σ_i, then q_{i+1} ∈ δ(q_i, a_i) and σ_{i+1} = σ_i; if a_i ∈ Σ_c, then (q_{i+1}, θ) ∈ δ(q_i, a_i) and σ_{i+1} = θσ_i for some θ ∈ Θ; and if a_i ∈ Σ_r, then either (q_{i+1}, θ) ∈ δ(q_i, a_i) and σ_i = θσ_{i+1} for some θ ∈ Θ, or (q_{i+1}, ⊥) ∈ δ(q_i, a_i) and σ_i = σ_{i+1} = ⊥. A run is accepting iff q_i ∈ F for infinitely many i. A VPA A accepts a word w iff there is an accepting run of A over w. We use L(A) to denote the set of words accepted by A. For VPA, the following proposition holds:

Proposition 2.2 ([Alur and Madhusudan 2004]). For any VPA, there is a VPA with an exponentially larger number of states for the complement language. The VPA emptiness problem is in PTIME.
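The three transition types can be illustrated by tracking the set of configurations a nondeterministic VPA can reach on a finite input prefix. The following Python sketch is our own encoding, not the paper's: a configuration is a pair (state, stack) with the stack a tuple ending in '⊥'.

```python
def vpa_step(delta_i, delta_c, delta_r, configs, a, kind):
    """One VPA step on symbol a; kind is 'int', 'call' or 'ret', matching the
    partition of the visibly pushdown alphabet.  configs is a set of
    (state, stack) pairs; the result is the set of successor configurations."""
    out = set()
    for (q, stack) in configs:
        if kind == "int":
            for q2 in delta_i.get((q, a), ()):
                out.add((q2, stack))                # stack unchanged
        elif kind == "call":
            for (q2, theta) in delta_c.get((q, a), ()):
                out.add((q2, (theta,) + stack))     # push theta
        else:  # return
            for (q2, theta) in delta_r.get((q, a), ()):
                if theta == "⊥" and stack == ("⊥",):
                    out.add((q2, stack))            # return on empty stack keeps ⊥
                elif theta != "⊥" and stack[0] == theta:
                    out.add((q2, stack[1:]))        # pop theta
    return out
```

Note how the input symbol's kind fully determines whether the stack grows, shrinks or stays put, which is the source of the good closure properties of VPA.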

2-way Alternating Jump Automata and their subclasses. We now define 2-way Alternating Jump Automata [Bozzelli 2007], a model that provides a direct way to navigate over input words using the global and abstract successors as well as the backwards and caller predecessors defined previously in this section. The corresponding functions defined on traces are straightforwardly extended to words over a visibly pushdown alphabet Σ. Also, we use DIR = {g, a, b, c} for the set of corresponding directions. A 2-way Alternating Jump Automaton (2-AJA) is a tuple A = (Q, Q_0, δ, Ω) where Q is a finite set of states, Q_0 ⊆ Q is a set of initial states, Ω : Q → {0, 1, …, k} is a priority assignment and δ : Q × Σ → B^+(DIR × Q × Q) is a transition function, where B^+(DIR × Q × Q) denotes positive boolean formulae over DIR × Q × Q. In the transition function, a triple (dir, q, q′) denotes that if the dir-successor or -predecessor of the current position i exists, the automaton starts a copy in state q at this successor or predecessor, and otherwise starts a copy in state q′ at position i + 1. We assume that every 2-AJA has two distinct states true and false with priority 0 and 1, respectively, such that δ(true, a) = (g, true, true) and δ(false, a) = (g, false, false) for all a ∈ Σ. We define several commonly used automata models as special cases of 2-AJA. In particular, an Alternating Parity Automaton (APA) is a 2-AJA with a transition function that maps to B^+({g} × Q × Q). An APA with a priority assignment Ω where Ω(q) ∈ {0, 1} for every q and a transition function mapping to disjunctions only is called a Nondeterministic Büchi Automaton (NBA). As usual for NBA, we define the acceptance condition by the set of states with priority 0 and write δ(q, a) as a set of states.
We now define the semantics of 2-AJA. A tree T is a subset of N* such that for every node τ ∈ N* and every positive integer n ∈ N: τ·n ∈ T implies (i) τ ∈ T (we then call τ·n a child of τ), and (ii) for every 0 < m < n, τ·m ∈ T. We assume every node has at least one child. A path in a tree T is a sequence of nodes τ_0 τ_1 … such that τ_0 = ε and τ_{i+1} is a child of τ_i for all i ∈ N_0. A (q, i)-run of a 2-AJA A over an infinite word w is a pair (T, r) where r : T → N × Q is a labelling function that satisfies r(ε) = (i, q) and for all τ ∈ T labelled r(τ) = (j, q′) we have a set {(dir_1, q′_1, q″_1), …, (dir_m, q′_m, q″_m)} satisfying δ(q′, w(j)) and children τ·1, …, τ·m that are labelled as follows: for all 1 ≤ h ≤ m, if succ_{dir_h}(w, j) is undefined, then r(τ·h) = (j + 1, q″_h), and r(τ·h) = (succ_{dir_h}(w, j), q′_h) otherwise. A (q, i)-run of a 2-AJA is accepting iff for every path in the run the lowest priority occurring infinitely often on that path is even. A 2-AJA A accepts a word w iff there is an accepting (q_0, 0)-run of A over w for some q_0 ∈ Q_0. We write L(A) for the set of words accepted by A. For 2-AJA and their subclasses, the following propositions hold:

Proposition 2.3 ([Bozzelli 2007]). For every 2-AJA with n states, there is a VPA with a number of states exponential in n accepting the same language.

Proposition 2.4 ([K 2008]). For any APA with n states and k priorities, there is an NBA with 2^{O(n · k · log(n))} states accepting the same language.

Proposition 2.5. The emptiness problem is in PSPACE for APA and in NLOGSPACE for NBA.

A MUMBLING HYPERLOGIC
In this section, we introduce mumbling Hμ. In Section 3.1, we define the syntax, explain it on a conceptual level and introduce relevant notations and conventions. Then, in Section 3.2, we present some example applications of mumbling Hμ suitable for the model checking of recursive programs. Finally, we define the semantics of the logic formally in Section 3.3.

Syntax of Mumbling Hμ
Mumbling Hμ is inspired by the hyperlogic HyperLTL_S [Bozzelli et al. 2021]. Like HyperLTL_S, mumbling Hμ is a hyperlogic with trace quantification and asynchronous progression on traces. Unlike HyperLTL_S however, it is a fixpoint calculus, has more expressive atomic properties, and has a simpler jump criterion.

Definition 3.1 (Syntax of mumbling Hμ). Let 𝒱 be a set of trace variables and let 𝒳 and 𝒴 be disjoint sets of fixpoint variables. We define three types of mumbling Hμ formulae by the following grammar:

φ ::= ap | ¬φ | φ ∨ φ | ○_f φ | X | μX. φ
ψ ::= [φ]_π | ¬ψ | ψ ∨ ψ | ○_Δ ψ | Y | μY. ψ
χ ::= ψ | ∃π. χ | ∀π. χ

where π ∈ 𝒱 is a trace variable, X ∈ 𝒳 and Y ∈ 𝒴 are fixpoint variables, Δ is a successor assignment mapping each trace variable to a trace formula, ap ∈ AP is an atomic proposition and f ∈ {g, a, c} is a successor or predecessor type.
We introduce some additional syntactical notions. A multitrace formula ψ is closed if every fixpoint variable used in it is bound, i.e. if in ψ as well as all its maximal trace subformulae φ, fixpoint variables X and Y only occur inside fixpoint formulae μX. φ′ and μY. ψ′, respectively. We call a hyperproperty formula χ closed if its maximal multitrace subformula is closed and, additionally, every trace variable π used in χ is bound by a quantifier. As usual, we assume that fixpoint variables occur positively in closed formulae, i.e. in scope of an even number of negations inside the corresponding fixpoint formula. We write Sub(χ) for the set of subformulae of χ and base(χ) for the set of base formulae of χ, i.e. the set of trace formulae φ occurring in a test [φ]_π or a successor assignment Δ(π) of χ. The size |χ| of a hyperproperty formula χ is defined as the number of its distinct subformulae. The same definitions apply to trace and multitrace formulae φ and ψ. Before introducing further definitions and examples, we informally describe the intuition behind each type of formula. Trace formulae (denoted φ) specify properties of single traces. Here, atomic propositions ap express that ap holds at the current position of the trace. Progress is made via next operators ○_f φ, which express that the f-successor or -predecessor exists at the current position and satisfies φ. Here, f can be one of three kinds of successors or predecessors: g for the global successor, a for the abstract successor, and c for the caller. The latter two allow us to express richer properties on traces generated by pushdown systems rather than Kripke structures. In addition, we have disjunction φ ∨ φ, negation ¬φ and fixpoints μX. φ to express more involved properties. Formulae of this kind essentially correspond to the logic VP-μTL from [Bozzelli 2007], a variant of the linear-time μ-calculus μTL [Vardi 1988] with various non-regular next operators as introduced by the logic CaRet [Alur et al. 2004]. Multitrace formulae (denoted ψ) express hyperproperties on a set of named traces π_1, …, π_k. Basic properties [φ]_π express that the trace formula φ holds at the current position on trace π. So-called successor assignments Δ assigning a trace formula to each trace describe points of interest on the traces. The next operator ○_Δ ψ advances each trace π to the next position where Δ(π) holds and checks ψ on the resulting suffixes. This next operator is inspired by, but different from, the one of HyperLTL_S, which advances every trace to the next point where the valuation of some formula from a set of trace formulae Γ differs from the current valuation. Also, note that ○_f and ○_Δ, being formulae on different levels, operate quite differently: the former advances a single trace to the f-successor or -predecessor while the latter advances all traces according to a successor assignment Δ simultaneously. Again, we have disjunction ψ ∨ ψ, negation ¬ψ and fixpoints μY. ψ for more complex properties. Finally, hyperproperty formulae (denoted χ) express hyperproperties. Here, we extend specifications by trace quantifiers ∃π. χ and ∀π. χ expressing that for some or each trace of a system, respectively, χ holds if π is bound to that trace.
We use common syntactic sugar: in trace formulae φ, we use true ≡ ap ∨ ¬ap, false ≡ ¬true, φ_1 ∧ φ_2 ≡ ¬(¬φ_1 ∨ ¬φ_2) and νX. φ ≡ ¬μX. ¬φ[¬X/X]. We use the same abbreviations for multitrace formulae ψ. Additionally, we borrow some LTL modalities as derived operators in order to improve readability: F_f φ ≡ μX. φ ∨ ○_f X, G_f φ ≡ ¬F_f ¬φ and φ_1 U_f φ_2 ≡ μX. φ_2 ∨ (φ_1 ∧ ○_f X). Again, we use the same abbreviations for formulae ψ, this time using ○_Δ operators instead of ○_f operators. Using some of these connectives and commonly known equivalences, we can impose additional restrictions on the syntax of mumbling Hμ. In particular, we assume a positive form where negation only occurs directly in front of atomic propositions ap in trace formulae and only in front of tests [φ]_π in multitrace formulae. The operator ○_f in trace formulae is not self-dual for f ∈ {a, c}, i.e. the equivalence ○_f φ ≡ ¬○_f ¬φ does not hold. We thus use a dual version ○̄_f φ ≡ ¬○_f ¬φ for these two operators to obtain a positive form. Intuitively, while the normal next operator is equivalent to false when the associated successor or predecessor type is undefined, the dual operator is equivalent to true in this case. Next, we assume a strictly guarded form where every fixpoint variable has to be preceded directly by a next operator. Finally, we assume that every fixpoint variable Y is bound by exactly one fixpoint construction μY. ψ or νY. ψ. The same applies to fixpoint variables X in trace formulae. As any formula can be transformed into an equivalent formula meeting these requirements, they do not form proper restrictions. They do, however, help us make the automata constructions in Sections 4 and 5 clearer.
We now define fragments and variants of the logic. For trace formulae, μTL [Vardi 1988] is the syntactic fragment where only the next operator ○_g is used. If, additionally, fixpoints are only used in φ_1 U_g φ_2 formulae, we obtain the logic LTL. Next, we introduce a name for the logic that uses only a subset of trace formulae: we use mumbling Hμ with basis B to denote the subset of mumbling Hμ where base(χ) ⊆ B for all formulae χ. We sometimes write mumbling Hμ with full basis instead of mumbling Hμ to denote the full logic. Finally, we denote the subset of mumbling Hμ where all ○_Δ operators use the same successor assignment Δ as mumbling Hμ with unique mumbling. In order to compare mumbling with the jump mechanism from HyperLTL_S [Bozzelli et al. 2021], we define stuttering Hμ as a variant of mumbling Hμ where ○_Γ operators are used instead of ○_Δ operators. Given a stuttering assignment Γ mapping each trace variable to a set of trace formulae, the operator ○_Γ advances each trace π to the next position with a different valuation of some φ ∈ Γ(π). We call a jump criterion Γ a stuttering assignment in order to highlight the difference to successor assignments Δ: an assignment Γ specifies positions that are similar and can thus be skipped, while an assignment Δ specifies positions that are of special interest and thus should be advanced to. For this variant, the notions of basis and unique stuttering are defined analogously to the main logic.

Example Properties
Let us discuss the utility of mumbling Hμ for the verification of recursive programs using some example hyperproperties and verification scenarios. We focus on properties with unique mumbling, since they are of particular practical interest due to their decidable model checking problem.
As a first example, consider an asynchronous variant of the information flow policy observational determinism [Clarkson and Schneider 2010]. Intuitively, it states that a system looks deterministic to a low security user who cannot inspect the secret variables of the system. More precisely, it requires that if two executions of a system initially match on inputs visible to a low security user, then they match on outputs visible to that user all the time. An earlier formulation of this property in HyperLTL from [Clarkson et al. 2014] required the progress in between observation points to be synchronous, i.e. the same number of steps has to be made on all traces. However, this is an unrealistic assumption for many systems. A different formulation of the property in HyperLTL_S from [Bozzelli et al. 2021] approached the problem by allowing consecutive steps with the same observable outputs on one trace to be matched by a (possibly different) number of steps with the same outputs on the other. However, this formulation can only model a user that is unable to identify that outputs have been performed unless they differ from previous outputs. We suggest a new formulation using the jump mechanism of mumbling Hμ: explicitly labelling observation points by an atomic proposition obs allows us to model many different kinds of low security observers, and our variant of observational determinism compares the visible outputs of two traces π_1 and π_2 at exactly these observation points, using the successor assignment {π_1 ↦ obs, π_2 ↦ obs}. We can formulate a stronger variant of this property with different successor criteria for different traces. When given a labelling with obs_1 and obs_2 modelling two different observers, we can use the successor assignment {π_1 ↦ obs_1, π_2 ↦ obs_2} instead of the previous one. Then, the property requires the system to have indistinguishable behaviour even for two observers who can inspect different sets of states. Note that the use of different successor criteria enables a trace to fulfil both the role of being observed by the first and being observed by the second observer. This variant still implies the previous requirement of indistinguishability of two traces tr_1, tr_2 inspected by the same observer, as the variant asserts that tr_1 observed by observer one is equivalent to tr_2 observed by observer two, which in turn is equivalent to tr_2 observed by observer one.
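A plausible rendering of the observational determinism formula discussed above, writing I for the input propositions and O for the output propositions visible to the observer; the concrete formula is our reconstruction under these assumptions, not quoted from the paper:

```latex
\forall \pi_1.\, \forall \pi_2.\;
\Big( \bigwedge_{ap \in I} [ap]_{\pi_1} \leftrightarrow [ap]_{\pi_2} \Big)
\;\rightarrow\;
G_{\Delta} \Big( \bigwedge_{ap \in O} [ap]_{\pi_1} \leftrightarrow [ap]_{\pi_2} \Big),
\qquad
\Delta = \{ \pi_1 \mapsto \mathit{obs},\ \pi_2 \mapsto \mathit{obs} \}
```

The G_Δ modality makes the output comparison range over exactly the obs-labelled observation points rather than over all positions.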
Similarly, one can formulate asynchronous variants of other information flow policies. Clarkson and Schneider, for instance, model a version of non-interference as a hyperproperty with quantifier alternation [Clarkson and Schneider 2010]. It requires that for all traces, there exists a trace without high security inputs such that the two traces are indistinguishable to a low security user who can only inspect atomic propositions from a set O. An asynchronous variant of this requirement can be expressed by a modification of a HyperLTL formula from [Clarkson et al. 2014] in which a trace without high security inputs is modelled by a trace in which all these inputs have been replaced by a dummy symbol dum. Here, we use a non-atomic test to state that high security inputs on π_2 are replaced by dum in all positions, including those not inspected by the successor criterion obs. As the test is performed on the first position of the trace, this is an example of filtering traces bound by a quantifier. Indeed, trace filtering motivated Bozzelli et al. [Bozzelli et al. 2021] to specifically include single trace formulae checked on the initial position in their decidable fragment of HyperLTL_S by a specific condition in the fragment's definition. In contrast, these tests are integrated in mumbling Hμ naturally and can be used on later positions as well. For example, assuming call positions for a procedure pr are labelled with pr, we can use the CaRet modality F_c to state that the procedure pr is currently in the call stack on trace π: [F_c pr]_π. This can prove useful since sometimes in information flow, the requirement of indistinguishability for low security users need not be as strict, e.g. if information is declassified when it is sent via an encrypted message. In such a case, we would not want to require indistinguishability inside a procedure pr that is used to send encrypted messages. By replacing the requirement ⋀_{ap∈O} [ap]_{π_1} ↔ [ap]_{π_2} in the non-interference property with (¬[F_c pr]_{π_1} ∧ ¬[F_c pr]_{π_2}) → ⋀_{ap∈O} ([ap]_{π_1} ↔ [ap]_{π_2}), we require indistinguishability only when neither π_1 nor π_2 is currently inside the procedure pr.
So far, we have focussed on which hyperproperties can be expressed in mumbling Hμ and only implicitly considered the system model. Besides Kripke structures, for which model checking specifications with unique mumbling is decidable, we consider pushdown systems, for which hyperproperty verification is inherently difficult: as we will see in Section 5, the model checking problem for pushdown systems is undecidable already for fixed hyperproperties from the literature that are expressible in synchronous hyperlogics like HyperLTL. While this implies that further restrictions are needed for decidability, we want these restrictions to be as lax as possible in order to be able to analyse as many systems as possible precisely. In this paper, we propose well-alignedness, a condition introduced and discussed later. Intuitively, while traces satisfying this condition must reach the same stack level on all observation points, they may differ, e.g., by executing procedures in between. As motivation for this restriction, consider the following two lines of thought. First, one of the main motivations for studying the verification of hyperproperties are security hyperproperties like the ones presented in this section. These hyperproperties express in different ways that certain traces of a system are very similar. We argue in Section 5.1 that it is reasonable to expect that in systems constructed with the aim to have very similar traces, stack actions along these traces are alike as well. Since pairs of traces from such systems satisfy well-alignedness by construction, they can be analysed precisely with the methods developed in this paper. Secondly, a precise analysis under well-alignedness is also possible for many systems in which stack actions are not perfectly aligned. For example, a scenario where one execution uses a recursive procedure call in between observation points while another one only performs iterative calculations constitutes a strong deviation from a perfect alignment of stack actions. However, differences like this are still allowed under well-alignedness. Thus, a precise analysis is possible in this scenario as well.

Semantics of Mumbling
We now formally define the semantics of mumbling . We do this incrementally, starting with trace formulae, then moving on to multitrace and hyperproperty formulae, and introducing required notation along the way. The semantics of a trace formula is defined with respect to a trace tr ∈ Traces and a fixpoint variable assignment V : → 2 N 0 assigning sets of positions to fixpoint variables. Intuitively, tr V ⊆ N 0 is the set of indices such that, if each fixpoint variable is interpreted to hold in the positions given by the set V ( ), holds on the suffix tr [ ] of tr. Definition 3.2 (Trace semantics). The semantics of trace formulae is given by: We use V 0 := .∅ for the empty fixpoint variable assignment over and write tr for tr V 0 . For the semantics of multitrace formulae, we introduce the notion of trace assignments.
A trace assignment is a partial function Π : Traces. If Π maps to traces from T ⊆ Traces only, we say that it is a trace assignment over T . In mumbling , progress is made via successor assignments Δ that assign a trace formula to every trace. For single traces, we define succ : Traces × N 0 → N 0 such that succ (tr, ) = min , where = { | > , ∈ tr }, if the set is non-empty, and succ (tr, ) = + 1 otherwise. Thus, succ advances a trace to the next position where holds, if one exists, and to the immediate successor otherwise. For trace assignments and a successor assignment Δ, progress is described by the function succ Δ that is defined as succ Δ (Π, ( 1 , ..., )) = (succ Δ ( 1 ) (Π( 1 ), 1 ), . . ., succ Δ ( ) (Π( ), )). We also define the -fold application of both of these successor functions: succ is the -fold application of the -successor function defined by succ 0 (tr, ) = and succ +1 (tr, ) = succ (tr, succ (tr, )). succ Δ is defined analogously. For stuttering assignments Γ, we introduce similar notation: For a set of trace formulae, we define succ : Traces × N 0 → N 0 such that succ (tr, ) = min { | > , ∈ tr ∈ tr for some ∈ }, if the set is non-empty, and succ (tr, ) = + 1 otherwise. succ Γ and its -fold application are then defined analogously to the same notions for Δ.
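As an illustration of the successor function, the following sketch (our own, not the paper's formal apparatus) models a trace as a finite list of sets of atomic propositions and approximates the trace formula by a hypothetical position predicate `holds`:

```python
# Sketch (assumptions ours): a trace is a list of sets of atomic
# propositions; the trace formula is given as a predicate over positions.

def succ(holds, trace, i):
    """Next position j > i with holds(trace, j); immediate successor otherwise."""
    for j in range(i + 1, len(trace)):
        if holds(trace, j):
            return j
    return i + 1

def succ_k(holds, trace, i, k):
    """k-fold application of succ."""
    for _ in range(k):
        i = succ(holds, trace, i)
    return i

# Example: the jump criterion is the atomic proposition "a".
trace = [{"a"}, set(), {"a"}, set(), set(), {"a"}]
holds_a = lambda tr, j: "a" in tr[j]
assert succ(holds_a, trace, 0) == 2   # next "a"-position after 0
assert succ(holds_a, trace, 2) == 5
assert succ(holds_a, trace, 5) == 6   # no further "a": immediate successor
assert succ_k(holds_a, trace, 0, 2) == 5
```

On infinite traces, the immediate-successor fallback only applies when no future position satisfies the criterion; the finite-list sketch mirrors this at the end of the trace.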
The semantics of a multitrace formula is defined with respect to a trace assignment Π and a fixpoint variable assignment W : → 2 N 0 where = |dom(Π)|. In the definition, Π W ⊆ N 0 is the set of vectors ( 1 , . . ., ) such that in the context of fixpoint variable assignment W, the combination of suffixes Π( 1 ) [ 1 ], . . ., Π( ) [ ] satisfies . Definition 3.3 (Multitrace semantics). The semantics of multitrace formulae is given by: As formalised in Appendix A.2, . and . characterise fixpoints. We again use W 0 := .∅ for the empty fixpoint variable assignment over and write Π for Π W 0 . Now, we define the semantics of hyperproperty formulae. In this definition, Π | = T denotes that the trace assignment Π over T satisfies . Definition 3.4 (Hyperproperty semantics). The semantics of hyperproperty formulae is given by: Remark 3.5. On traces generated by a Kripke structure K, a is equivalent to g and c is equivalent to false. Thus, any hyperproperty formula can be translated to an equivalent hyperproperty formula ′ without these two operators. We investigate the following decision problems: • Fair Finite State Model Checking: given a closed mumbling hyperproperty formula and a fair Kripke structure (K, ), decide whether (K, ) | = holds.
• Fair Pushdown Model Checking: given a closed mumbling hyperproperty formula and a fair PDS (PD, ), decide whether (PD, ) | = holds. Note that the fair model checking problem is stronger than the traditional model checking problem since an instance of the latter can trivially be transformed into an instance of the former by declaring all states of the input structure target states. It is convenient to consider this stronger variant for the reduction in Section 4.1.

FAIR FINITE STATE MODEL CHECKING
In this section, we solve the fair finite state model checking problem. We show that the complexity is the same as for HyperLTL model checking despite the addition of fixpoints, non-atomic tests and a new jump criterion. We consider two restrictions. The first one is the restriction to unique mumbling. This is necessary as the problem is already undecidable for HyperLTL without the corresponding restriction [Bozzelli et al. 2021], a result that transfers to mumbling via the reduction from Theorem 6.1 (presented in Section 6). The second restriction is to consider only the basis AP. As we show in Section 4.1, this is not a proper restriction since the model checking problem for the full basis can be reduced to this fragment. Afterwards, we present an algorithm for model checking with the two restrictions in Section 4.2. Both subsections also prepare us for the procedure for pushdown model checking in Section 5: The reduction is suitable for both model checking variants, and the pushdown model checking procedure will have the same general structure as the one for finite state systems.

Restriction of the Basis
We start this section by showing how the fair model checking problem for mumbling with full basis can be reduced to the fair model checking problem for mumbling with basis AP ′ for an extended set of atomic propositions AP ′ ⊇ AP. The reduction has the nice property that it keeps the number of successor assignments the same, which is crucial for decidability. It thus allows us to focus our efforts on developing a model checking procedure for mumbling with an atomic basis since such a procedure can be combined with the reduction to obtain a procedure for the full logic. Even though we want to solve the finite state model checking problem first, we present a more general construction that works for both Kripke structures and PDS. Our construction is inspired by a similar construction from [Bozzelli et al. 2021] and uses their idea to track the satisfaction of formulae by newly introduced atomic propositions. However, we cannot directly apply their results since (i) we need to track the satisfaction of formulae from a more expressive logic, requiring a more powerful type of automaton, and (ii) the reduction must also work for PDS.
Conceptually, we proceed as follows. Given a mumbling hyperproperty formula and a fair PDS (PD, ), we transform into a formula ′ over basis AP ′ for an extended set of atomic propositions AP ′ ⊇ AP and (PD, ) into a fair PDS. The main idea is to track satisfaction of the formulae in base( ) by atomic propositions at ( ) in the translation. This is done by first constructing a VPA A base ( ) that ensures for every formula in base( ) that at ( ) is encountered iff indeed holds in this position of the input word of A base ( ) . We intersect this automaton with (PD, ) to obtain the system (PD ′ , ′ ) that is properly labelled with at ( ) labels. Then, we replace tests [ ] or jump criteria Δ( ) in by [at ( )] or at (Δ( )), respectively, to obtain the formula ′ with basis AP ′ .
We describe the construction of a VPA A for arbitrary finite sets of closed trace formulae over AP. For this, we first introduce some notation. We expand the set of atomic propositions AP by AP := {at ( ) | ∈ } to obtain AP := AP ∪ AP and expand traces from (2 AP • {int, call, ret}) to (2 AP • {int, call, ret}) . For a word ∈ (2 AP • {int, call, ret}) , we use ( ) AP to denote the restriction of to (2 AP • {int, call, ret}) . Additionally, let cl( ) be the least set of trace formulae such that (i) ⊆ cl( ), (ii) cl( ) is closed under semantic negation, that is, if ∈ cl( ) then ′ ∈ cl( ), where ′ is the positive form of ¬ , and (iii) if ∈ Sub( ′ ) and ′ ∈ cl( ) then ∈ cl( ).
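The closure cl( ) can be computed by a standard worklist iteration. The following sketch uses a toy formula grammar of our own (atomic propositions, conjunction, disjunction and a next operator, represented as nested tuples); it merely stands in for the paper's full fixpoint syntax:

```python
# Toy grammar (ours): ("ap", p), ("nap", p) for a negated proposition,
# ("and", f, g), ("or", f, g), ("next", f). Negation is kept in positive
# form by dualising, as required by condition (ii).

def positive_neg(phi):
    """Positive form of the negation of phi over the toy grammar."""
    op = phi[0]
    if op == "ap":
        return ("nap", phi[1])
    if op == "nap":
        return ("ap", phi[1])
    if op == "and":
        return ("or", positive_neg(phi[1]), positive_neg(phi[2]))
    if op == "or":
        return ("and", positive_neg(phi[1]), positive_neg(phi[2]))
    if op == "next":
        return ("next", positive_neg(phi[1]))
    raise ValueError(op)

def closure(formulae):
    """Least superset of `formulae` closed under positive-form negation
    (condition (ii)) and subformulae (condition (iii))."""
    todo, cl = list(formulae), set()
    while todo:
        phi = todo.pop()
        if phi in cl:
            continue
        cl.add(phi)
        todo.append(positive_neg(phi))       # condition (ii)
        if phi[0] in ("and", "or", "next"):  # condition (iii)
            todo.extend(phi[1:])
    return cl

phi = ("and", ("ap", "a"), ("next", ("ap", "b")))
cl = closure({phi})
assert ("nap", "a") in cl                      # negation of a subformula
assert all(positive_neg(f) in cl for f in cl)  # closed under negation
assert len(cl) == 8
```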
We now sketch the construction. The goal is to construct a VPA A that recognizes all traces tr with the property that for all ∈ , at ( ) holds in a position on tr iff holds on this position on the trace's restriction (tr) AP . Depending on whether we have a PDS or a Kripke structure, we construct a 2-AJA or an APA first. This automaton loops on an initial state and conjunctively moves to a module checking for every atomic proposition at ( ) encountered and to a module checking ¬ ′ for every atomic proposition at ( ′ ) not encountered. These modules are constructed using established techniques for transforming fixpoint formulae into automata: We introduce a state for each ∈ cl( ). Its transition function can either check directly, if it is an atomic formula, or move to states for the subformulae of using suitable transitions, if it is not. Fixpoints introduce loops in the automaton. The priorities are assigned to reflect the nature and nesting of the fixpoints. The details of this construction are given in Appendix B.1. Note that due to Remark 3.5, we can assume base( ) not to contain formulae using a or c operators when considering the fair finite state model checking problem. Our construction introduces non-global moves only for these operators, so an APA suffices in this case. Applying Proposition 2.3 or Proposition 2.4 to the automaton constructed so far, we obtain a nondeterministic automaton with the following properties: Lemma 4.2. Given a set of closed trace formulae over AP, one can construct a VPA A over 2 AP • {int, call, ret} with a number of states exponential in |AP | satisfying: 1) for all ∈ L (A ), ≥ 0 and ∈ cl( ), we have: at ( ) ∈ ( ) iff ∈ ( ) AP . 2) for each trace tr ∈ Traces, there exists ∈ L (A ) such that tr = ( ) AP . If is a set of TL formulae, then A is an NBA.
The details of the intersection of (PD, ) and A base ( ) are described in Appendix B.2. We obtain:

Fair Finite State Model Checking
Now, we show how to decide the fair model checking problem for mumbling with unique mumbling and basis AP. We borrow the idea from [Bozzelli et al. 2021] to build a Kripke structure whose traces represent summarised variants of the original Kripke structure's traces and then analyse these traces synchronously. In contrast to [Bozzelli et al. 2021], where decidability for HyperLTL model checking is obtained by reduction to the model checking problem for synchronous HyperLTL, we present a direct model checking procedure here. This also introduces ideas for the model checking procedure in Section 5.
We show how to check (K, ) | = for a fair Kripke structure (K, ) and a closed hyperproperty formula := . . . 1 1 . with basis AP and unique successor assignment Δ. We use to denote the subformula . . . 1 1 . with the innermost quantifiers. As special cases, we have 0 = and = . In a nutshell, we inductively construct automata A that are equivalent to the formulae in a certain sense. If the modes of progression of formulae and automata match, the notion of K-equivalence from [Finkbeiner et al. 2015] is suitable. We adapt this notion first: Given a set of traces T , a closed hyperproperty formula and an automaton A, we call A T -equivalent to iff for all trace assignments Π over T binding the free trace variables in , satisfaction of by Π and acceptance by A coincide. In our current setup, however, we deal with formulae that advance trace assignments asynchronously in accordance with a successor assignment Δ such that the modes of progression of formulae differ from that of the automata to be used. We thus define a new notion of equivalence that also respects successor assignments. In this definition, we need the notation Π Δ for a trace assignment Π that is summarised with respect to a successor assignment Δ, i.e. where all positions that are skipped by Δ are left out. For a trace formula and a trace tr, the trace summary sum (tr) is given by sum (tr)( ) = tr (succ (tr, 0)). Then, Π Δ is given by Π Δ ( ) = sum Δ ( ) (Π( )).
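The trace summary can be sketched as follows (again on finite traces, with the jump criterion approximated by a hypothetical predicate `holds`; the sketch is self-contained and ours, not the paper's notation). The summarised trace keeps exactly the positions 0, succ(0), succ(succ(0)), and so on:

```python
# Sketch of the trace summary sum(tr)(i) = tr(succ^i(tr, 0)) over a
# finite trace given as a list of sets of atomic propositions.

def summarise(holds, trace):
    """Letters of `trace` at positions visited by iterating succ from 0."""
    kept, i = [trace[0]], 0
    while True:
        # next position j > i where the criterion holds, else i + 1
        nxt = next((j for j in range(i + 1, len(trace)) if holds(trace, j)), i + 1)
        if nxt >= len(trace):
            return kept
        kept.append(trace[nxt])
        i = nxt

trace = [{"a"}, {"b"}, {"a", "b"}, {"c"}, {"a"}]
holds_a = lambda tr, j: "a" in tr[j]
# summarising by jump criterion "a" keeps positions 0, 2, 4
assert summarise(holds_a, trace) == [{"a"}, {"a", "b"}, {"a"}]
```

Note that position 0 is always kept, matching sum(tr)(0) = tr(succ⁰(tr, 0)) = tr(0), and that the immediate-successor fallback means no position is skipped once the criterion stops occurring.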
Definition 4.5 ((Δ, T )-equivalence). Given a set of traces T , a hyperproperty formula with unique successor assignment Δ and an automaton A, we call A (Δ, T )-equivalent to iff for all trace assignments Π over T binding the free trace variables in , satisfaction of by Π and acceptance of the summarised assignment Π Δ by A coincide. In the case where is closed, the equivalence in this definition reduces to T | = iff ∈ L (A) for the unique word over the single-letter alphabet of empty tuples. Thus, model checking a fair Kripke structure (K, ) against a formula with unique successor assignment Δ can be reduced to an emptiness test on an automaton A that is (Δ, Traces(K, ))-equivalent to .
Now that this notion is established, we present the inductive construction of the automata A . In the base case, where 0 = , we reuse an automaton construction for the synchronous logic from [Gutsfeld et al. 2021] as the automaton A . For this purpose, we need a connection between T -equivalence and (Δ, T )-equivalence that we establish next. Let be the variant of where Δ is replaced with the synchronous successor assignment Δ = .true. Since we only have atomic tests, belongs to the synchronous fragment of from [Gutsfeld et al. 2021].
Lemma 4.6. Let be a closed multitrace formula with unique successor assignment Δ and basis AP and let A be an automaton that is T -equivalent to for all sets of traces T . Then, A is (Δ, T )-equivalent to for all sets of traces T .
The following theorem is a combination of Theorem 5. Starting with the automaton A 0 from Theorem 4.8, we inductively construct automata A that are (Δ, Traces(K, ))-equivalent to . For ≥ 1, we have = . −1 and construct an NBA A with input alphabet (2 AP ) − from the NBA A −1 with input alphabet (2 AP ) − +1 and the structure (K, ). Note that A 0 can indeed be assumed to be given as an NBA by Proposition 2.4. Since has basis AP, we know that Δ( ) = ap for some ap ∈ AP. We transform (K, ) into a fair Kripke structure (K ap , ap ) such that Traces(K ap , ap ) = sum ap (Traces(K, )), where sum (T ) = {sum (tr) | tr ∈ T } is the straightforward extension of sum to sets. Then, we can use a standard construction for handling quantifiers as used e.g. for HyperLTL [Finkbeiner et al. 2015], with the difference that we use (K ap , ap ) instead of (K, ). In short, when is an existential quantifier, we build the product of A −1 and (K ap , ap ) and perform a projection on the components of the input alphabet other than the one representing . Universal quantifiers are handled using complementation. For this, an NBA can be interpreted as an APA, complemented without size increase, and then turned into an NBA again using Proposition 2.4. In order to avoid further exponential costs in the model checking procedure, we restrict the following theorem to formulae whose outermost quantifier is an existential one. Outermost universal quantifiers can be handled by constructing the automaton for the negation of the formula instead. The details of this construction as well as the proof of the following theorem can be found in Appendix B.4.
Theorem 4.9. Let (K, ) be a fair Kripke structure and let be a hyperproperty formula with unique successor assignment Δ, an outermost existential quantifier, basis AP and quantifier alternation depth . There is an NBA that is (Δ, Traces(K, ))-equivalent to . Combining the model checking procedure from this subsection with the reduction from Lemma 4.3, we obtain a model checking procedure for with full basis. From corresponding bounds for HyperLTL [Rabe 2016], we can derive matching lower bounds for the complexity of the model checking problem for fixed structure and formula, respectively. Overall, we obtain: Theorem 4.10. The fair finite state model checking problem for alternation depth mumbling with unique mumbling is complete for EXPSPACE. For fixed formulae, the problem is ( − 1)EXPSPACE-complete for ≥ 1 and NLOGSPACE-complete for = 0.

FAIR PUSHDOWN MODEL CHECKING
Now, we tackle the fair model checking problem for pushdown systems. By the next theorem, the restriction to unique mumbling is not enough to obtain a decidable model checking problem on its own. The theorem follows from a straightforward reduction from HyperLTL model checking against PDS, which is known to be undecidable [Pommellet and Touili 2018].
Theorem 5.1. Pushdown model checking for mumbling with unique mumbling is undecidable.
Undecidability of pushdown model checking applies not only to specially crafted formulae; it also applies to relevant information flow policies. An example is generalised non-interference, one of the information flow properties that motivated the introduction of HyperLTL [Clarkson et al. 2014]. It is described by a HyperLTL formula. A proof by reduction from the equivalence problem for pushdown automata can be found in Appendix C.1. Theorem 5.2. Checking Generalised Non-Interference is undecidable for pushdown systems.
In order to regain decidability, we propose to replace the standard successor operator by well-aligned successor operators. After introducing these operators in Section 5.1, we present a corresponding model checking procedure for pushdown systems in Section 5.2.

Well-alignedness
In many applications, hyperproperties are used to specify that different executions of a system satisfying certain conditions are sufficiently similar. This is particularly the case for applications from the realm of security, where hyperproperties such as Observational Determinism require that executions of a system are so similar that they are indistinguishable from the perspective of a low security user. In such situations, we expect that systems specifically crafted to satisfy these properties can be constructed such that outputs visible to the attacker are generated in the same procedures, or at least at the same stack level, in many cases despite the deviations of the executions induced by differences in secret data.
We develop well-aligned next operators for a precise analysis in such situations. Informally, these operators Δ coincide with the normal next operators Δ but additionally require that the subtraces that are skipped by them start on a common stack level, end on a common stack level, and agree on the lowest stack level they encounter. Nevertheless, the call and ret behaviour on different traces may differ widely, e.g. by executing procedures unmatched by the other traces between observed positions. Thus, well-alignedness still covers a wide range of interesting behaviour. In particular, for systems constructed as described above, the aligned next operator Δ coincides with the standard next operator Δ and opens the way to analyse hyperproperties for recursive systems by automatic methods. Note also that the formula wa = G Δ true (where G Δ is the well-aligned analogue of G Δ ) expresses explicitly that the traces under consideration are well-aligned with respect to Δ indefinitely. This formula can be used either to require certain properties captured by a subformula pr for well-aligned evolutions only, by using wa as a pre-condition as in wa → pr , or to require well-alignedness in addition to the property, as in wa ∧ pr . The addition of the formula wa as a precondition or a conjunct of subformulae preserves unique mumbling such that the resulting formulae still belong to the fragment for which model checking for pushdown systems is decidable. Given these considerations and given the undecidability results for the logic with respect to pushdown systems, we believe that the approximation by well-aligned successors is a useful approach to address recursive systems in an automated verification method for hyperproperties.
In order to formalise the notion of well-aligned traces, we define the ret-call profile of traces via a notion of abstract summarisation. Intuitively, an abstract summarisation is a sequence of transition symbols progressing a trace while taking an abstract successor whenever possible, and the ret-call profile counts the ret and call symbols that cannot be summarised in an abstract step. Then, well-aligned traces are those that share the same ret-call profile. Formally, the abstract summarisation abssum(tr) ∈ {abs, call, ret} * of a finite trace tr is constructed from tr as described next. Let tr abs be the version of tr where every int symbol is replaced with abs. We construct a maximal sequence tr 0 , tr 1 , . . ., tr with tr 0 = tr abs and abssum(tr) = tr |ts such that for all < , tr +1 is obtained from tr in the following way: if tr = 0 , 0 , . . ., , let 1 < be the minimal index such that there is 2 > 1 with 1 = call and succ (tr , 1 ) = 2 . Then tr +1 = 0 0 . . . 1 −1 1 abs 2 2 . . . , i.e. the call symbol at position 1 is replaced by abs and the positions strictly between 1 and 2 are removed. It is easy to see that the sequence is unique and can be constructed for every finite trace. Thus, abssum(tr) is well-defined. From the definition of abstract successors, it is also easy to see that abssum(tr) is contained in the regular language (abs * ret) p (abs * call) q abs * for some p, q ∈ N 0 . We then call (p, q) the ret-call profile of tr. We define: Definition 5.3. We call finite traces tr 1 , . . ., tr well-aligned iff they have the same ret-call profile.
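The ret-call profile admits an equivalent counter-based computation: instead of literally rewriting the abstract summarisation, every call matched by a later ret collapses into an abstract step, and only the unmatched symbols are counted. A sketch under that reading (the function names are ours):

```python
# Sketch (ours): counter-based computation of the ret-call profile of a
# finite sequence of transition symbols over {"int", "call", "ret"}.

def ret_call_profile(symbols):
    """Profile (p, q): p unmatched rets followed by q unmatched calls,
    mirroring the shape (abs* ret)^p (abs* call)^q abs* of the abstract
    summarisation."""
    p = open_calls = 0
    for s in symbols:
        if s == "call":
            open_calls += 1
        elif s == "ret":
            if open_calls:
                open_calls -= 1   # matched call/ret pair becomes abs
            else:
                p += 1            # ret with no earlier open call
    return (p, open_calls)

def well_aligned(*subtraces):
    """Definition 5.3: finite (sub)traces are well-aligned iff their
    ret-call profiles coincide."""
    return len({ret_call_profile(t) for t in subtraces}) <= 1

assert ret_call_profile(["call", "int", "ret"]) == (0, 0)
assert ret_call_profile(["ret", "int", "ret", "call"]) == (2, 1)
# one trace calls and returns from a procedure, the other stays flat:
assert well_aligned(["call", "int", "ret"], ["int", "int"])
```

This matches the intuition stated above: the traces may execute arbitrarily many fully matched procedures in between, as those contribute nothing to the profile.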
Intuitively, the main insight underlying our analysis is that well-aligned traces can be progressed in tandem using a single stack, even though they have different call and ret behaviour. For this, sequences of abs moves can be turned into internal steps and the different traces can synchronise their stack actions on the common ret and common call moves.
We now define a well-aligned variant, succ Δ , of the successor function succ Δ . Let Π be a trace assignment with Π( ) = tr and let = ( 1 , . . ., ), ′ = ( ′ 1 , . . ., ′ ) be vectors such that ′ := succ Δ ( ) (tr , ). We define succ Δ as the partial function such that succ Δ (Π, ) = ′ if the skipped finite subtraces tr [ , ′ ] are well-aligned, and that is undefined otherwise. From now on, we use a version of Δ that uses this successor operator in its semantics: Notice that this operator is not self-dual. However, we can easily introduce its dual version Δ with the following semantics: On traces generated from Kripke structures, the semantics of both these next operators coincides with that of Δ . Moreover, for formulae in positive form, replacing the standard next operator with Δ or Δ leads to formulae that under- or over-approximate the semantics of the original formula, respectively.

Fair Pushdown Model Checking
We now proceed with the model checking procedure. For this purpose, let (PD, ) be a fair pushdown system over the stack alphabet Θ and := . . . 1 1 . a hyperproperty formula with basis AP that uses a single successor assignment Δ and well-aligned next operators. We again write for the subformula . . . 1 1 . and have 0 = and = . Also, we again build an automaton A that is, in a certain sense, equivalent to in order to reduce the fair model checking problem to an emptiness test of an automaton. Here, we define a slightly different notion of equivalence compared to Definition 4.5 that also respects well-alignedness.
For this purpose, we introduce the well-aligned encoding Δ Π of a trace assignment Π. Intuitively, in addition to the propositional symbols P already occurring in the previous encoding Π , the well-aligned encoding contains ret and call symbols according to the ret-call profile of the well-aligned subtraces that are skipped by Δ as well as ⊤-symbols where these subtraces are not well-aligned. Before we can formally define this encoding, we need notation for the number of steps for which the well-aligned next operator is defined on a trace assignment Π. For this, let prog (Π, Δ, ) = (succ Δ ) (Π, (0, . . ., 0)) be the progress made by steps of the well-aligned Δ successor operator on the trace assignment Π. Note that prog (Π, Δ, ) may be undefined for certain indices . We call the supremum of the set { ∈ N 0 | prog (Π, Δ, ) is defined} the length of the Δ-well-aligned prefix of Π and denote it by wapref (Π, Δ). For a formal definition of Δ Π , let Π be a trace assignment over T with Π( ) = tr ∈ Traces, let Δ( ) = and let = tr (succ (tr , 0)). For < wapref (Π, Δ), let ( , ) be the ret-call profile of the finite trace that is skipped by step on tr 1 , i.e. the ret-call profile of tr 1 [succ 1 (tr 1 , 0), succ +1 1 (tr 1 , 0)]. Since step is well-aligned, ( , ) is the ret-call profile of the finite traces corresponding to step on all other traces tr as well. Moreover, let P = ( 1 , . . ., ). We define For single traces tr, we also define tr = Δ Π where dom(Π) = { }, Π( ) = tr and Δ( ) = . For the empty trace assignment {}, we say that Δ { } is a well-aligned encoding of {} if it is contained in the language (() • {ret} * • {call} * ) . Thus, unlike trace assignments assigning at least one trace, {} has multiple encodings. Based on this encoding, we adapt our notion of equivalence between formulae and automata: Definition 5.4 (Aligned (Δ, T )-equivalence). Given a set of traces T , a hyperproperty formula with well-aligned next operators and unique successor assignment Δ as well as an automaton A, we call A aligned (Δ, T )-equivalent to iff for all trace assignments Π over T binding the free trace variables in , both required conditions hold. From the second requirement, we can see that model checking a fair PDS (PD, ) against a formula can be solved by intersecting an automaton that is (Δ, Traces(PD, ))-equivalent to with an automaton for the encodings of {} and testing the resulting automaton for emptiness.
We now have the necessary tools and notation for our construction. The process is similar to that in Section 4. We first construct an APA that is aligned (Δ, Traces(PD, ))-equivalent to the inner formula and then inductively handle the quantifiers of formulae for ≥ 1. Unlike in Section 4, where we relied on a connection to synchronous formulae, we construct the automaton A explicitly here in order to cope with the distinction between well-aligned and non-well-aligned parts of the trace assignment encoded by the input word. In this construction, we do not care about the behaviour on words that do not represent well-aligned encodings, as such words do not matter for aligned (Δ, T )-equivalence.
As in the construction of A from Section 4.1, we use established techniques to transform fixpoint formulae into automata and introduce a state ′ for every subformula ′ of in the construction of A . The transition function of ′ moves to states for the subformulae of ′ in a suitable manner when encountering P-symbols and skips ret- and call-symbols. In order to handle well-aligned encodings and the two variants of the next operator, we have two copies ( ′ , ) and ( ′ , ) of each state. Intuitively, the bit in a state ( ′ , ) indicates whether we accept or reject if we encounter a ⊤-symbol indicating that the next step is not well-aligned. Thus, for ′ = Δ ′′ , we transition to ( ′′ , ) to indicate that for to hold, the next step has to be well-aligned. Likewise, for ′ = Δ ′′ , we transition to ( ′′ , ) to indicate that if the next step is not well-aligned, ′ holds. The priorities are again assigned to reflect the nature and nesting of fixpoints. The details of this construction can be found in Appendix C.2. Theorem 5.5. For any closed multitrace formula with well-aligned next operators, unique successor assignment Δ and basis AP, there is an APA A of size linear in | | that is aligned (Δ, T )-equivalent to for all sets of traces T .
Similar to Section 4.2, we now handle the quantifiers and inductively construct an automaton A that is aligned (Δ, Traces(PD, ))-equivalent to . The general idea of the construction for an existential quantifier is the same as in that section: On input of an encoding wa Π of a trace assignment Π binding − trace variables, we simulate a trace tr of (PD, ) in the state space of the automaton and feed the encoding wa Π ′ of the trace assignment Π ′ = Π[ ↦ → tr] binding − + 1 trace variables into the inductively given automaton A −1 . However, there are a number of difficulties compared to the construction in the finite state case. First of all, our construction has to handle the call and ret behaviour of the system and the well-aligned encoding. We thus construct a VPA instead of an APA here. Moreover, we have to handle the fact that Π and Π ′ can be non-well-aligned from some point onward. The easier case is where the lengths of the well-aligned prefixes of Π and Π ′ coincide. In this case, we can just feed the ⊤-symbols from the input into A −1 . The more difficult case is where the length of the well-aligned prefix of Π is strictly greater than that of Π ′ . We handle this case by nondeterministically guessing a point where the next step is not well-aligned, checking that this is indeed the case by finding a call on tr matched by a ret on wa Π (or any other combination of non-matching behaviour) and feeding ⊤-symbols into A −1 . In both cases, we cannot continue simulating the stack behaviour of both wa Π and tr since the behaviour is not well-aligned. Thus, we stop simulating tr in these cases and just check that the prefix up to that point can be extended to a fair trace using Proposition 2.1. Before we perform the main construction, we need two auxiliary constructions, which we present first.
First, for Δ( ) = ap, we transform (PD, ) into a pushdown system (PD ap , ap ) with PD ap = ( ap , 0,ap , ap , ap ), a structure that progresses the well-aligned encodings ap tr of traces tr from Traces(PD, ) by simulating finite traces in between inspected states based on their abstract summarisations. More precisely, a finite subtrace with ret-call profile (p, q) is simulated by first making p ret-steps (each corresponding to a part abs * ret in the abstract summarisation), followed by q call-steps (each corresponding to a part abs * call in the abstract summarisation) and finally one int-step (corresponding to the final abs * part in the abstract summarisation) in (PD ap , ap ). The final int-step comes in handy when reading the propositional symbols of an inspected state in the construction of A . This transformed structure is used later to obtain the encoding of Π[ ↦ → tr] for a trace tr of (PD, ) by composing wa Π with ap tr generated from (PD ap , ap ). The transformation to (PD ap , ap ) is done in two steps. We first construct an intermediate structure (PD ′ , ′ ) with two copies of each state reachable by the jump criterion ap. This structure has an int-step between the two copies in order to ensure that one step corresponding to the final abs * part of a trace's abstract summarisation is made whenever such a state is visited. In that structure, we calculate abstract successors and build ret, call and int transitions corresponding to the abs * ret, abs * call and abs * parts of the abstract summarisation, respectively. A formal description is given in Appendix C.3.
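The simulation step for a single skipped subtrace can be made concrete: a subtrace with ret-call profile (p, q) is replaced by p ret-steps, q call-steps and one final int-step. A minimal sketch (the function name is ours):

```python
def simulate_profile(p, q):
    """Transition symbols used in place of a skipped subtrace with
    ret-call profile (p, q): p rets, then q calls, then one int."""
    return ["ret"] * p + ["call"] * q + ["int"]

# a subtrace returning twice and then entering one new procedure:
assert simulate_profile(2, 1) == ["ret", "ret", "call", "int"]
# the empty profile still produces the final int-step
assert simulate_profile(0, 0) == ["int"]
```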
Secondly, in order to check whether prefixes of paths of (PD ap , ap ) can be extended into fair paths, we use the multi-automaton A P D = ( P D , 0,P D , P D , P D ) from Proposition 2.1. Since multi-automata read stacks top-down while we build stacks bottom-up in our main construction, we will use this automaton in reverse, i.e. we will start in final states and aim to reach initial states by following its transitions backwards. For this, we assume that the automaton is reverse-total, i.e. we assume that for all ′ ∈ P D and ∈ Θ, there is a state ∈ P D such that ′ ∈ P D ( , ). Intuitively, this means that every state has a predecessor. This can be achieved easily by introducing an additional non-initial state.
Intuitively, the automaton reads an encoding Δ Π as follows: it starts in its copy wa reading the prefix containing only P, ret and call symbols (lines 1-6 in Figure 1). Here, it simulates both (PD ap , ap ), to check for an encoding of a trace tr, and A −1 , to check whether it accepts the encoding Δ Π ′ for the extended trace assignment. We use a standard construction to combine the Büchi conditions of (PD ap , ap ) and A −1 into one: A bit indicates whether we have seen a state of ap , and it is reset to 0 when a state from −1 is seen. This is expressed in the formula ( , ′ , , ) and makes sure that only the runs satisfying both Büchi conditions are accepting. Additionally, we track a reverse-run of A P D in the fourth component of a state. This is done by starting in a final state of A P D and updating the state to match a predecessor of the previous state whenever making a call transition. Additionally, we store the old state on the stack in order to enable backtracking of the reverse-run when making a return transition. When this reverse-run ends in state (which is checked in the conditions ), this indicates that there is a continuation into a fair path of (PD ap , ap ) starting in with the current stack content. At any point in the prefix, the automaton can nondeterministically move to its copy ua (lines 1-2) to check whether there is a mismatch in the encodings of tr and Δ Π (lines 11 ff.). Here, it accepts iff A −1 accepts when reading only ⊤ symbols from this point onwards. This is checked in the states ⊤ (lines 9-10). Since we do not follow the existentially quantified path in this part of the automaton anymore, a transition into this part of the automaton can only be made if there is a continuation of the path into a fair path. Finally, it can also enter the states ⊤ when encountering a ⊤ symbol (lines 7-8) since that means that both Δ Π and Δ Π ′ are not well-aligned from this point onward. For universal quantifiers, we use complementation as in Section 4.2. For this, we use Proposition 2.2 since A is given as a VPA instead of an NBA.

Proof. (Sketch) The part of the claim about the size of A can be seen by inspecting the construction. For the inner formula , we know that |A | is linear in | | for the APA A from Theorem 5.5. An alternation removal construction to transform it into an NBA increases the size to exponential in | |. Complementation constructions are performed using Proposition 2.2 for each quantifier alternation, each further increasing the size exponentially. Finally, the size measured in |PD| is one exponent smaller since the structure is only introduced into the automaton after the first alternation removal construction.
Using the notation = . . . 1 1 . with special cases 0 = and = , we show that A is (Δ, Traces(PD, ))-equivalent to by induction on . The base case immediately follows from Theorem 5.5. In the inductive step, the more interesting case is that of an existential quantifier since the case of a universal quantifier is a corollary of the proof for an existential quantifier. For this case, we show both directions of the required claim separately. In the first direction, we can directly use the induction hypothesis and then distinguish cases based on the length of the well-aligned prefixes of Π and Π[ ↦ → ] since each of these cases induces a different form of the accepting run we construct. In the other direction, we distinguish cases based on the length of the well-aligned prefix of Π and additionally on the form of the accepting run of the automaton to construct a trace tr and trace assignment Π[ ↦ → ] to which we can apply the induction hypothesis. In both directions, the most interesting case is the one where the length of the well-aligned prefix of Π is strictly greater than that of Π[ ↦ → ].
Again combining the procedure from this section with the reduction from Lemma 4.3, we obtain a fair model checking procedure for PDS. Additionally, we can derive lower bounds for the complexity from finite state HyperLTL model checking [Rabe 2016] and LTL pushdown model checking [Bouajjani et al. 1997]. We obtain:

Theorem 5.7. The fair pushdown model checking problem for alternation depth mumbling with unique mumbling and well-aligned successor operators is in ( + 1)EXPTIME and in EXPTIME for fixed formulae. For ≥ 1, it is EXPSPACE-hard and ( − 1)EXPSPACE-hard for fixed formulae. For = 0, it is EXPTIME-complete.

EXPRESSIVENESS OF STUTTERING AND MUMBLING
In this section, we compare the two jump mechanisms stuttering and mumbling with respect to expressiveness. It is easy to write a formula expressing that some formula from a set changes its valuation between the current position on a trace and the next. This can be used to mimic the behavior of a stuttering next operator by a mumbling next operator. There is a slight mismatch in the positions visited by the operators, but this can be accounted for by shifting tests with a next operator. This translation can be used to obtain the following results:

Theorem 6.1. Fair pushdown and finite state model checking for stuttering can be reduced in linear time to fair pushdown and finite state model checking for mumbling, respectively.

Lemma 6.2. Mumbling with unique mumbling and basis LTL (resp. full basis) is at least as expressive as stuttering with unique stuttering and basis LTL (resp. full basis).
Detailed proofs can be found in Appendices D.1 and D.2. On the other hand, there are cases where stuttering cannot mimic the behaviour of mumbling. For example, consider a trace ({ } • ∅) and mumbling criterion . While mumbling visits every other position, it is easy to see that stuttering must necessarily visit every position on this trace independently of the stuttering criterion since the postfixes of this trace coincide in every other position. When considering the basis LTL, this mismatch in expressivity between the jump criteria cannot be compensated on the level of formulae. For this, consider the hyperproperty H = {T ⊆ (2 AP ) | ∀tr, tr ′ ∈ T . |{ | ∈ tr ( )}| = |{ | ∈ tr ′ ( )}|} expressing that all traces of a set have the same number of -positions. We show:

Lemma 6.3. The hyperproperty H is expressible in mumbling with unique mumbling and basis AP while not expressible in stuttering with unique stuttering and basis LTL.
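To make the hyperproperty H concrete, the following sketch checks the "same number of labelled positions" condition on finite trace prefixes. The real H ranges over infinite traces, so this only conveys the idea; the function names and the proposition name `p` are ours, not the paper's.

```python
# Illustrative check of the hyperproperty H on *finite* trace prefixes: all
# traces must agree on the number of positions labelled with a proposition p.
# Traces are sequences of label sets (subsets of AP). This is a sketch only;
# H itself is a property of sets of infinite traces.

def count_p(trace, p="p"):
    """Number of positions of the trace whose label set contains p."""
    return sum(1 for letter in trace if p in letter)

def satisfies_H(traces, p="p"):
    """True iff every pair of traces agrees on the number of p-positions."""
    counts = {count_p(t, p) for t in traces}
    return len(counts) <= 1

# Matching p-positions may be arbitrarily far apart, which is why the
# property is asynchronous: only the counts matter, not the alignment.
t1 = [{"p"}, set(), {"p"}, set()]         # two p-positions
t2 = [set(), {"p"}, set(), {"p"}, set()]  # two p-positions, shifted
t3 = [{"p"}, set(), set(), set()]         # one p-position
assert satisfies_H([t1, t2])
assert not satisfies_H([t1, t3])
```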

Proof. The first part of this claim, namely expressing H in mumbling with unique mumbling, is straightforward and can be found in Appendix D.3.
For the second part, we first adapt some of the theorems from Section 4 to stuttering. In particular, we define (Γ, T )-equivalence in the obvious way. It is easy to see that the results of Theorem 4.8 carry over to this notion of equivalence. Additionally, we use a claim about LTL in which we write nd ( ) for the nesting depth of next operators in .

Claim 1. For all trace formulae ∈ with nd ( ) = and traces tr, we have ∈ tr iff + 1 ∈ tr if there is a set ⊆ AP such that ( ) = for all ≤ ≤ + + 1.
This claim can easily be established by induction (see Appendix D.3). It generalises Theorem 4.1 from the classic paper [Wolper 1981] about the expressivity of LTL.
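The flavour of Claim 1 can be observed with a small LTL evaluator over ultimately periodic traces: a formula whose next-operator nesting depth is n evaluates identically at consecutive positions inside a sufficiently long block of identical labels. The tuple encoding of formulas and the evaluator itself are our sketch, not part of the paper; the until operator is handled by its least-fixpoint unfolding, returning false when the unfolding cycles through the loop.

```python
# Tiny LTL evaluator over ultimately periodic traces w = prefix · loop^ω.
# Formulas (our encoding): ("ap", p), ("not", f), ("and", f, g), ("X", f),
# ("U", f, g).

def evaluator(prefix, loop):
    def norm(i):  # positions >= |prefix| repeat with period |loop|
        return i if i < len(prefix) else len(prefix) + (i - len(prefix)) % len(loop)
    def letter(i):
        i = norm(i)
        return prefix[i] if i < len(prefix) else loop[i - len(prefix)]
    cache, active = {}, set()
    def ev(f, i):
        key = (f, norm(i))
        if key in cache:
            return cache[key]
        if key in active:  # U unfolded back to itself: least fixpoint => False
            return False
        active.add(key)
        op = f[0]
        if op == "ap":
            res = f[1] in letter(i)
        elif op == "not":
            res = not ev(f[1], i)
        elif op == "and":
            res = ev(f[1], i) and ev(f[2], i)
        elif op == "X":
            res = ev(f[1], i + 1)
        else:  # ("U", f, g): g now, or f now and (f U g) at the next position
            res = ev(f[2], i) or (ev(f[1], i) and ev(f, i + 1))
        active.discard(key)
        cache[key] = res
        return res
    return ev

# Four identical labels {p}: the depth-1 formula X p evaluates identically at
# the consecutive positions 0 and 1 inside the block, matching Claim 1.
ev = evaluator([{"p"}] * 4, [{"q"}])
assert ev(("X", ("ap", "p")), 0) == ev(("X", ("ap", "p")), 1)
assert ev(("U", ("ap", "p"), ("ap", "q")), 0)  # p holds until the q-loop
```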
Assume towards a contradiction that there is a hyperproperty formula = . . . . 1 1 . from stuttering with unique stuttering expressing the property H . Let = . . . 1 1 for ≤ , with 0 = and = . Let Γ be the stuttering assignment used in and Γ( ) = . We say that a jump criterion ∈ 2 makes a type one step on a trace tr at position if succ ( , ) is given by the first case in the definition of succ . Similarly, we say that makes a type two step on tr at position if succ ( , ) is given by the second case. Finally, we say that makes a type one/two step (without specifying a position) if it makes a type one/two step at position 0. Here, Property (2) follows from the fact that when makes a type one step at position , then it also makes a type one step at all earlier positions ≤ . Property (3) follows from the fact that tr maximises the quantity in (1): if Property (3) did not hold for some tr, then tr • tr would have more type one positions than tr given that (2) holds.
We transform in the same way as in the reduction presented in Section 4.1. That is, for every ∈ base( ), we introduce a fresh atomic proposition at ( ) and replace tests and stuttering criteria in with the respective atomic propositions. This yields a formula at . For such formulae, we properly label traces with these atomic propositions, i.e. we extend each position on a trace tr ∈ (2 { } ) with the set {at ( ) | ∈ Π ( ) } to obtain a trace tr at . Analogously, we define variants Π at of trace assignments Π and T at of sets of traces T . It is straightforward to see that for all trace assignments Π. It is also clear that the Γ-variant of Theorem 4.8 is applicable to at since this formula has an atomic basis. Let A thus be the automaton for at according to Theorem 4.8.
It is easy to see that T ∈ H while T ′ ∉ H . For any trace assignment Π over T , let Π ′ be the trace assignment defined by . Below, we show by induction over that for all trace assignments Π over T and ∈ {0, . . . , }, Π | = T implies Π ′ | = T ′ . For = , this would mean that T ∈ H , i.e. T | = , implies T ′ | = , i.e. T ′ ∈ H , a contradiction.
In the base case, assume that Π | = T . By Property (4), we have (0, . . . , 0) ∈ at Π at . Since A is (Γ, T at )-equivalent to at by the Γ-variant of Theorem 4.8, we have an accepting run of A on Π Γ at . Consider this accepting run of A after − |A | − nd ( ) − 2 = |tr | + 2 • nd ( ) + 1 steps. This situation is depicted in Figure 2. On the one hand, for all ∈ TypeOnePos, the suffix left to read in component of Π Γ at is ∅ . For tr at 0 , this is due to the fact that by Claim 1 and Property (2), it takes at most nd ( ) applications of succ to move over the prefix { } , by the same argument it takes at most nd ( ) applications of succ to move over ∅ and finally, it takes at most |tr | applications of succ to move over tr . The argument for tr at 1 is analogous. This case is represented in lines one and three of Figure 2. On the other hand, for all ∈ TypeTwoPos, the suffix left to read in component of Π Γ at is . This is due to the fact that by Property (3), makes only type two steps on tr 0 and tr 1 . This case is represented in lines two and four of Figure 2. Thus, during the next |A | + 1 steps, the automaton reads the same symbols on all traces: For all ∈ TypeOnePos, there are only ∅-symbols in these positions on both tr 0 and tr 1 , which by Claim 1 and the fact that the suffix is ∅ all have the same extended labelling in tr at 0 and tr at 1 . For all ∈ TypeTwoPos, there are only { }-symbols on tr 0 and only ∅-symbols on tr 1 in these positions. By Claim 1 and the fact that the next nd ( ) + 1 positions after these steps are also { }- or ∅-symbols on tr 0 and tr 1 , respectively, the extended labelling in tr at 0 and tr at 1 is the same for these steps as well. During these |A | + 1 steps where the same symbol is seen on each trace, at least one state of A is visited twice. Let be the number of steps between the two visits of . We add |A |! { }-positions to the { }-prefix of tr 0 to obtain tr ′ 0 and |A |!
∅-positions to the ∅-prefix of tr 1 to obtain tr ′ 1 . This situation is depicted in Figure 3. Since |A |! is a multiple of , we do not change the acceptance of A : after the application of the pumping argument, the run can repeat the loop from to |A |! times and then proceed as before. We have four cases for component of the input word where Π( ) = tr 0 or Π( ) = tr 1 and ∈ TypeOnePos or ∈ TypeTwoPos. Loops in the automaton are added when reading the red areas (areas with solid border). The additional symbols in the blue areas (areas with dotted border) are skipped when the traces are progressed with type one steps. The extended labelling of the additional positions is the same as that of the position directly before them since there are at least nd ( ) + 1 successive positions with the same symbol. By the same argument as before, the extended labelling on the added positions is the same as on the position directly after. Thus, (i) for ∈ TypeOnePos, the same number of applications of succ as before is needed to skip over { } +| A |! or ∅ +| A |! due to Claim 1, thus the run is again in the ∅-suffix after |tr | + 2 • nd ( ) + 1 steps where A can loop from to without changing the suffix of the trace to be processed and (ii) for ∈ TypeTwoPos, the loops from to read exactly the additional symbols. The parts of the traces where these loops are taken are marked in red in Figure 3 (areas with solid border). Consequently, we have an accepting run of A over Π ′Γ at and conclude Π ′ | = T ′ by again using Property (4) and the fact that A is also (Γ, T ′ at )-equivalent to at . The inductive step considers the quantifiers 1 , . . . , and follows straightforwardly from the semantics of quantifiers and the induction hypothesis.
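The pigeonhole-plus-factorial pumping step used in this argument can be sketched in isolation: within |A|+1 steps over the same symbol some state repeats, and since the loop length divides |A|!, inserting |A|! further copies of the symbol is absorbed by repeating the loop. The 3-state deterministic automaton below is hypothetical, not one from the paper.

```python
import math

# Pumping sketch: on |A|+1 steps reading the same symbol, a run revisits some
# state (pigeonhole); the loop length l divides |A|!, so |A|! extra copies of
# the symbol are absorbed by repeating the loop |A|!/l times.

delta = {("q0", "a"): "q1", ("q1", "a"): "q2", ("q2", "a"): "q1"}
states = {"q0", "q1", "q2"}

def run(state, word):
    for sym in word:
        state = delta[(state, sym)]
    return state

n = len(states)
seen, s = {}, "q0"
for step in range(n + 1):  # pigeonhole: a repeat occurs within n+1 visits
    if s in seen:
        loop_len = step - seen[s]
        break
    seen[s] = step
    s = delta[(s, "a")]

assert math.factorial(n) % loop_len == 0  # l divides |A|!
# Reading |A|! additional copies of the symbol leaves the reached state unchanged.
assert run("q0", "a" * 5) == run("q0", "a" * (5 + math.factorial(n)))
```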
Combining Lemma 6.2 and Lemma 6.3, we obtain:

Theorem 6.4. Mumbling with basis LTL and unique mumbling is strictly more expressive than stuttering with basis LTL and unique stuttering.
As simple HyperLTL , the decidable fragment of HyperLTL from [Bozzelli et al. 2021], can straightforwardly be embedded into stuttering with unique stuttering, these results also directly imply that the hyperproperty H is not expressible in simple HyperLTL and that mumbling with unique mumbling is strictly more expressive than simple HyperLTL . Surprisingly, the lower expressivity of stuttering can be compensated by exploiting the power of fixpoints:

Lemma 6.5. Stuttering with unique stuttering and full basis is at least as expressive as mumbling with unique mumbling and full basis.

Proof. We show this lemma by presenting a translation from a mumbling formula with unique mumbling to an equivalent stuttering formula ˆ with unique stuttering. Let = . . . . 1 1 . be a mumbling formula with unique mumbling using the successor assignment Δ. We assume is in positive form, i.e. negation occurs only in front of tests in . As in other proofs, we define = . . . . 1 1 . with 0 = and = . We assume w.l.o.g. that every test in is either only applied at the first position of a trace or only at later positions. This can be achieved by unrolling fixpoints so that all tests are either unguarded (and thus only apply to the first position) or in the scope of at least one Δ operator (and thus only apply to later positions).

We introduce ′ as an abbreviation for the trace formula (¬ U ( ∧ ′ )). Intuitively, ′ asserts that (i) there is a future position where holds and (ii) ′ holds at the next such position. For ∈ {1, . . . , }, we define formulae 0 , and ˜ which we explain later: We set Γ( ) = { 0 , 1 , . . . , , ˜ 1 , . . . , ˜ } and replace every test [ ] in the scope of a Δ operator by [(¬ U ( ∧ )) ∨ ( ∧ ¬F )] . After doing so for all trace variables , we replace every next operator Δ with Γ , obtaining a multitrace formula ˆ . ˆ is then given as . . . . 1 1 . ˆ . The equivalence of and ˆ follows from the following claim in which ˆ is defined analogously to :

Claim 2. For all sets of traces T and trace assignments Π over T ,

A formal proof of this claim by induction on can be found in Appendix D.4. Here, we explain the intuition of the translation. First, consider a trace where is true on a finite number of positions. This case is illustrated in Figure 4. On such traces, the formulae and ˜ do not change their valuation since their first conjunct is never fulfilled. We thus use 0 to progress on such traces. Intuitively, the formula expresses that (i) there are only finitely many -positions on the trace (expressed by F G¬ ) and (ii) there is an odd number of -positions after the current position (expressed by the fixpoint formula .((G¬ ) ∨ ( ))). In this formula, G¬ identifies the positions with exactly one -position after them. Additionally, the use of in each fixpoint iteration advances by two -positions and thus expresses that a position satisfying the fixpoint is an even number of -positions away from the base case. Since the number of -positions left on the trace is decreased by one whenever a -position is encountered, 0 changes its valuation exactly at the positions where holds.
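The least-fixpoint counting idea just described can be rendered on finite traces: the base case covers positions with exactly one later labelled position, and each Kleene iteration steps over two further labelled positions, so the fixpoint collects exactly the positions with an odd count after them. The list-of-label-sets encoding, the proposition name `b` and the function names are ours, a sketch rather than the paper's formula.

```python
# Finite-trace rendering of the least-fixpoint parity trick: compute the set
# of positions with an odd number of b-positions strictly after them by
# Kleene iteration instead of direct counting.

def b_positions_after(trace, i, b="b"):
    """Indices j > i whose label set contains b."""
    return [j for j in range(i + 1, len(trace)) if b in trace[j]]

def odd_by_fixpoint(trace):
    n = len(trace)
    # Base case: exactly one b-position remains after position i.
    base = {i for i in range(n) if len(b_positions_after(trace, i)) == 1}
    X = set()
    while True:  # Kleene iteration: X_{k+1} = base ∪ step(X_k)
        step = set()
        for i in range(n):
            later = b_positions_after(trace, i)
            if len(later) >= 2 and later[1] in X:
                step.add(i)  # skipping two b-positions lands in X
        new_X = base | step
        if new_X == X:
            return X
        X = new_X

trace = [{"b"}, set(), {"b"}, {"b"}, set()]
assert odd_by_fixpoint(trace) == {2}  # only position 2 has an odd count after it
```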
Next, consider a trace where holds infinitely often. This case is illustrated in Figure 5. On such traces, the formula 0 does not change its valuation since the first conjunct is never fulfilled. Here, we use the formulae and ˜ to progress on the trace. For these formulae, the first conjunct GF is used to identify the case that there is an infinite number of -positions. Additionally, we have:

(1) is satisfied on positions with an even number of positions that satisfy ∧ after the current position and before the next position satisfying ¬ ∧ . The base case of the fixpoint formula ( ¬ ) identifies positions with no further -positions between them and the next position where ¬ ∧ is true. Each fixpoint iteration (by ( ∧ ( ∧ ))) advances by two -positions where is true as well.

(2) Analogously, ˜ is satisfied on positions with an even number of positions that satisfy ¬ ∧ after the current position and before the next position satisfying ∧ .

As a consequence of (1) and (2), either or ˜ changes its value on all -positions if there are infinitely many -positions where holds and infinitely many -positions where ¬ holds, i.e. if the valuation of on -positions changes infinitely often. If this is not the case, i.e. if the valuation of is constant for all ∈ {1, . . . , } on -positions from some point onward, or ˜ change their value on all -positions up to that point. Thus, Γ advances a trace exactly like Δ except in situations where there are infinitely many -positions and the valuation of all tests is constant on -positions on the suffix of the trace. However, in this case we can use the fact that the valuation of all tests is constant and perform future tests on arbitrary -positions. This is done by replacing tests [ ] by [(¬ U ( ∧ )) ∨ ( ∧ ¬F )] . The disjunct (¬ U ( ∧ )) is equivalent to if the stuttering assignment has correctly advanced to a -position and tests at the next -position if the stuttering assignment has not correctly advanced. Additionally, ( ∧ ¬F ) accounts for the tests that are performed on the suffix where does not hold in the case with finitely many -positions.
From Lemma 6.2 and Lemma 6.5, we conclude:

Theorem 6.6. Mumbling with unique mumbling and stuttering with unique stuttering are expressively equivalent.

RELATED WORK
In [Gutsfeld et al. 2021], the logic for asynchronous hyperproperties was introduced. It is based on the linear time -calculus TL with an asynchronous notion of progress on different paths and is one inspiration for the logic presented in this paper. However, does not include abstract modalities or a jump mechanism and has only been considered on finite models. The same holds true for the logics presented in [Baumeister et al. 2021; Bonakdarpour et al. 2020] that make use of trajectories to model asynchronous progress. Bozzelli et al. [Bozzelli et al. 2021] recently introduced an asynchronous variant of HyperLTL based on a mechanism to specify an indistinguishability criterion for positions on traces, another inspiration for the logic in the current paper. Another logic for asynchronous hyperproperties is observation-based HyperLTL [Beutner and Finkbeiner 2022]. The concept of observation points in that logic is very similar to our notion of mumbling. However, due to a different choice of infinite state system model and verification technique, [Beutner and Finkbeiner 2022] cannot provide precise decidability or complexity results, whereas our work does. Additionally, they do not consider the expressiveness of different jump criteria. In [Bozzelli et al. 2022], different asynchronous hyperlogics are compared with respect to expressivity. As opposed to the study of expressiveness in this paper, [Bozzelli et al. 2022] compares the unrestricted versions of the logics rather than focussing on decidable fragments.
There are only two other approaches for model checking hyperlogics against pushdown models that we are aware of. The approach of [Pommellet and Touili 2018] consists of model checking HyperLTL against a regular over- or underapproximation of the pushdown model. This approach, however, considers neither asynchronicity nor non-regular modalities and its restrictions are unrelated to the notion of well-alignedness we introduce. The other approach, [Bajwa et al. 2023], uses quantification over stack access patterns to align the stack actions of different traces. A preprint version of the current paper is discussed in the related work section of [Bajwa et al. 2023], which suggests that their approach might be inspired by our notion of well-aligned modalities. They only cover synchronous hyperproperties, where a common stack access pattern corresponds to a special case of our notion of well-alignedness.
Finally, there are two approaches to hyperlogics that are orthogonal to the one using named quantifiers and thus only indirectly related to the current work. In logics with team semantics [Gutsfeld et al. 2022; Krebs et al. 2018; Virtema et al. 2021], a formula is evaluated over multiple traces (teams) at once instead of only a single one. The adoption of team semantics seems to lead to logics expressively incomparable to our approach. Other logics add an equal-level predicate to first- and second-order logics [Coenen et al. 2019; Finkbeiner 2017; Spelten et al. 2011]. The work [Coenen et al. 2019] discovered that these logics can be placed in an expressiveness hierarchy with synchronous hyperlogics with trace quantification while the work [Bozzelli et al. 2022] suggests that this may not be the case for asynchronous hyperlogics with trace quantification. Finally, we note that these two approaches have not yet been considered for the verification of recursive programs.

CONCLUSION
We proposed a novel logic for the specification and verification of asynchronous hyperproperties. In addition to other extensions, the logic provides a new jump mechanism on traces that is simpler yet more expressive for LTL jump criteria than a related mechanism used by the logic HyperLTL . Under an assumption necessary for decidability, we provided a model checking algorithm for both finite and pushdown models, the first model checking algorithm for asynchronous hyperproperties on pushdown models. For the finite state case, the complexity of the model checking procedure coincides with that of simple HyperLTL despite the increased expressiveness. For the pushdown case, we introduced a concept called well-alignedness as an enabler for decidability. The ability to model check pushdown systems in conjunction with the ability to handle asynchronicity and the abstract, non-regular modalities renders our algorithm a promising approach for automatic verification of hyperproperties on recursive programs.

A APPENDIX TO SECTION 3

A.1 Definition of Fixpoint Alternation Depth
In the construction of A from Section 4.1 and the construction of A from Section 5.2, as well as the associated lemmas, we need a notion of fixpoint alternation depth for trace and multitrace formulae. Fixpoint alternation depth is a well-established measure for the complexity of nested fixpoint formulae and often, as in our case, a parameter for the complexity of algorithmic constructions for fixpoint formulae. In a formula , we say that the variable ′ depends on the variable , written ≺ ′ , if is a free variable in fp( ′ ), where fp( ′ ) is the unique fixpoint binding ′ . We write < to denote the transitive closure of ≺ . Then, the alternation depth ad ( ) is the length of the longest chain 1 < • • • < such that adjacent variables have a different fixpoint type. For example, let ( ) be a trace formula in which the variables in the list occur freely. Then, .(( .( )) ∧ ′ ( )) and . .( , ) have fixpoint alternation depth 1 while . . .( , , ) has fixpoint alternation depth 3. We extend this notion to finite sets of trace formulae: ad ( ) = max{ad ( ) | ∈ }. For multitrace formulae , the notion is extended straightforwardly but considers only fixpoints . ′ or . ′ in and not the fixpoints in the base formulae of .
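The chain-based definition can be prototyped on a tuple encoding of fixpoint formulas. The encoding and the recursive formulation below are ours, a sketch of the standard notion rather than the paper's formal definition; a single fixpoint or an alternation-free nesting has depth 1, and each genuine type change along a dependency chain adds 1.

```python
# Fixpoint alternation depth on formulas encoded as nested tuples:
# ("mu"/"nu", var, body), ("var", x), ("ap", p), ("and"/"or", f, g).

def free_vars(f):
    op = f[0]
    if op == "var":
        return {f[1]}
    if op in ("mu", "nu"):
        return free_vars(f[2]) - {f[1]}
    if op == "ap":
        return set()
    return set().union(*(free_vars(g) for g in f[1:]))

def subformulas(f):
    yield f
    if f[0] in ("mu", "nu"):
        yield from subformulas(f[2])
    elif f[0] not in ("ap", "var"):
        for g in f[1:]:
            yield from subformulas(g)

def ad(f):
    op = f[0]
    if op in ("ap", "var"):
        return 0
    if op in ("mu", "nu"):
        other = "nu" if op == "mu" else "mu"
        best = 1  # the fixpoint itself starts a chain of length 1
        for sub in subformulas(f[2]):
            # a dependent fixpoint of the opposite type extends the chain
            if sub[0] == other and f[1] in free_vars(sub):
                best = max(best, 1 + ad(sub))
        return max(best, ad(f[2]))
    return max(ad(g) for g in f[1:])

# mu-mu nesting is alternation-free (depth 1); nu-mu-nu with dependencies
# between adjacent fixpoints has depth 3, matching the examples in the text.
same = ("mu", "X", ("mu", "Y", ("and", ("var", "X"), ("var", "Y"))))
assert ad(same) == 1
alt3 = ("nu", "X", ("mu", "Y", ("nu", "Z",
        ("and", ("var", "X"), ("and", ("var", "Y"), ("var", "Z"))))))
assert ad(alt3) == 3
```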

A.2 Formal Results about the Fixpoint Semantics in Subsection
is monotone for all V, and in positive normal form.
. tr V is the least fixpoint of . It can be characterised by its approximants. In this construction, we write ad ( ) for the fixpoint alternation depth of , defined in the usual way (see e.g. [Demri et al. 2016] or Appendix A.1), and extend this notion to sets: ad ( ) = max{ad ( ) | ∈ }.
Given a set of trace formulae , we construct the 2-AJA A over 2 AP ∪ {int, call, ret} that ensures that ( ) holds at a position on a trace from (2 AP • {int, call, ret}) if and only if holds at this position on the trace's restriction to (2 AP • {int, call, ret}) . The alphabet 2 AP ∪ {int, call, ret} is divided into three parts in the obvious way: Σ i = 2 AP ∪ {int}, Σ c = {call} and Σ r = {ret}. The automaton is given as ( , 0, , , Ω ) where := { | ∈ ( )} × {0, 1} ∪ { 0 } × {0, 1} and 0, = {( 0 , 0)}. We have two copies of each state to deal with the fact that the input words we are interested in alternate between symbols from 2 AP and symbols from {int, call, ret}. For the priority assignments, we always assign Ω(( , 0)) = Ω(( , 1)) and thus omit the second component of each state in the description. The priority assignment Ω for the initial state 0 is given as Ω ( 0 ) := 0 whereas for the other states , it is defined depending on the structure of . We first assign priorities for fixpoint variables and fixpoints, that is for ∈ { , . ′ , . ′ }.
We do so by inspecting all maximal chains 1 < ′′ • • • < ′′ (where adjacent variables do not necessarily have different fixpoint types) for formulae ′′ ∈ and assigning priorities to the first variable based on the fixpoint type: greatest fixpoints and their variables get priority 0 and least fixpoints get priority 1. Then, we move through the chains and assign this priority as long as the fixpoint type does not change. When it changes, we increase the currently assigned priority by one and continue. For all other states, let be the highest priority assigned so far. Then, we assign for ∉ { , . ′ , . ′ }. Notice that when ad ( ) = 1 for all ∈ , we only need priorities 0 and 1 and A is an Alternating Büchi Automaton (ABA), i.e. an APA with only priorities 0 and 1. For ABA, there is a variant of Proposition 2.4 that allows dealternation into an automaton of size 2 O ( ) instead of 2 O ( •log( ) • ) . This concludes the construction of A .
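The priority assignment along a single chain can be sketched directly: the first variable gets 0 for a greatest fixpoint and 1 for a least fixpoint, the priority is kept while the type is unchanged and incremented at each type change. The list encoding of a chain (fixpoint types, outermost first) is our hypothetical representation.

```python
# Priority assignment along one maximal dependency chain of fixpoint
# variables, following the scheme described above.

def assign_priorities(chain):
    """chain: list of fixpoint types, 'nu' (greatest) or 'mu' (least),
    outermost first. Returns one priority per variable."""
    current = 0 if chain[0] == "nu" else 1
    priorities = [current]
    for prev, typ in zip(chain, chain[1:]):
        if typ != prev:       # fixpoint type changes: bump the priority
            current += 1
        priorities.append(current)
    return priorities

assert assign_priorities(["nu", "mu", "nu"]) == [0, 1, 2]
assert assign_priorities(["mu", "mu"]) == [1, 1]  # alternation-free: Büchi-like
```

Note that the scheme preserves parity in the expected way: along an alternating chain, least-fixpoint variables always receive odd priorities and greatest-fixpoint variables even ones.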
For the proof of Lemma 4.2, we need some additional notation.Given an automaton A, a state in A and a set of indices , we use A [ : ] to denote an automaton that behaves exactly like A except for the state where it accepts iff the run is currently at an index from the set .
The second part of Lemma 4.2 can be shown constructively. Given a trace tr, is obtained by amending every position of tr with the set of atomic propositions { ( ) | ∈ tr }. For the first part, let A be the subautomaton of the 2-AJA version of A with only the states for subformulae of . In this automaton, states for free fixpoint variables may have undefined transition behaviour, but we circumvent this by filling these states using the notion A [ : ]. For these automata A , we show a result stronger than the first part of Lemma 4.2 that can be shown inductively since it also applies to formulae with free fixpoint variables. Part one of Lemma 4.2 then follows immediately from this claim and Proposition 2.3. In particular, we show:

Claim 3. Let be a trace formula over AP with free fixpoint variables 1 , . . . , and let A be the automaton described above. Furthermore, let V be a fixpoint variable assignment, ∈ (2 AP • {int, call, ret}) be an input word and ≥ 0 be an index. Then,

Claim 3 is shown by a straightforward structural induction on . As our construction uses an established technique to transform fixpoint formulae into automata, this part of the proof follows the associated proof technique as performed, e.g., in the proof of Theorem 4.7 in [Gutsfeld et al. 2021].
The most interesting case is that of fixpoints, where we use the fact that states for least fixpoints and their fixpoint variables can only be visited finitely many times while states for greatest fixpoints and their fixpoint variables may be visited infinitely often due to their priority.From this, it can be shown that the set of indices from which the automaton A has an accepting (( , 0), )-run on can be expressed as a least or greatest fixpoint, respectively, of a function : ↦ → { | A [ 1 : V ( 1 ), . . ., : V ( ), : ] has an accepting (( , 0), )-run on }.This fixpoint can then be compared to the semantics of the formula using its characterisation by approximants from Corollary A.2.
Using Claim 3 and inspecting the initial state of the 2-AJA A , it is straightforward to see that it fulfills the first part of Lemma 4.2. It is also straightforward to see that if contains only TL formulae, A is an APA. The claim that it is also possible to construct a VPA/NBA of the claimed size then follows immediately from Proposition 2.3/Proposition 2.4.
B.2 Detailed Construction of (P D ′ , ′ ) from Subsection 4.1

The pushdown system PD ′ = ( ′ , ′ 0 , ′ , ′ ) with a labelling over AP and target states ′ is given as the product of PD = ( , 0 , , ) with target states and the VPA A = ( , 0, , , ). The stack alphabet Θ of PD ′ is given as Θ 1 × Θ 2 where Θ 1 is the stack alphabet of PD and Θ 2 is the stack alphabet of A . In order to improve readability in the definition of the transition relation, we write ( , ) and ⊆ AP with ( , ′ ) ∈ and ′ ∈ ( , ( ∪ ( ), int))). Similarly, we write ( , ) where with ≠ iff = 0 and ∈ or = 1 and ∈ . As target states ′ , we have: Intuitively, the four components of the structure's states play the following roles: The first and second components are used to build a product of PD and A . The third component is used to properly extend the labelling from one only assigning AP labels to one assigning AP labels in a consistent manner. The last component is used to combine the fairness condition of (PD, ) with the acceptance condition of A . Here, we apply the standard idea for combining Büchi acceptance conditions: The transition relation switches from copy 0 to copy 1 when a state ∈ is encountered and from copy 1 to copy 0 when a state ∈ is encountered. Thus, paths visiting the target states ′ visit both original target sets infinitely often.
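The standard copy-switching trick for combining two Büchi conditions F1 and F2 can be sketched on an explicit lasso run: a bit moves from copy 0 to copy 1 on seeing F1 and back on seeing F2, and the combined target "in F2 while in copy 1" recurs iff both F1 and F2 are visited infinitely often. State names and the lasso encoding below are hypothetical.

```python
# Büchi-condition intersection via a copy bit, evaluated on a run that
# repeats run_loop forever. Two traversals of the loop are enough: they get
# past any transient bit value, and a hit within them occurs exactly when
# both F1 and F2 intersect the loop.

def combined_targets_hit(run_loop, F1, F2):
    bit, hits = 0, 0
    for _ in range(2):
        for q in run_loop:
            if bit == 0 and q in F1:
                bit = 1          # F1 seen: switch to copy 1
            elif bit == 1 and q in F2:
                hits += 1        # combined target visited
                bit = 0          # F2 seen: switch back to copy 0
    return hits > 0

loop = ["a", "b", "c"]
assert combined_targets_hit(loop, {"a"}, {"c"})      # both conditions recur
assert not combined_targets_hit(loop, {"a"}, {"x"})  # F2 never visited
```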
Case 7: = . 1 . We use a fixpoint approximant characterisation of and and write . We then show by transfinite induction over that ( , . . . , ) ∈ (∅) iff prog(Π, Δ, ) ∈ (∅). To avoid confusion, we write (SIH) for the induction hypothesis of the structural induction and (TIH) for the induction hypothesis of the transfinite induction. The base case = 0 follows directly from (SIH) if we can establish that the assumption from the lemma holds for W [ ↦ → ∅] and W ′ [ ↦ → ∅]. For all ′ ≠ , the assumption follows from the fact that it holds for W and W ′ . For ′ = , the assumption follows from the fact that is mapped to ∅ in both vector fixpoint variable assignments. In the inductive step ↦ → + 1, we use (TIH) to establish that the claim holds for . Thus, the lemma's assumption holds for W [ ↦ → (∅)] and W ′ [ ↦ → (∅)] and we can use (SIH) to establish the claim for + 1. Finally, the limit case < ↦ → follows directly from (TIH).
B.4 Detailed Construction of A from Subsection 4.2 and Proof of Theorem 4.9

Using the automaton A from Theorem 4.8, we inductively construct an automaton A that is (Δ, Traces(K, ))-equivalent to by adding a technique to handle the quantifiers. Recall that = . . . 1 1 . . We write for the formula . . . 1 1 . and have special cases 0 = and = . The construction is performed inductively. When addressing the quantifier , i.e. when handling the formula = . −1 for ≥ 1, we construct the automaton A with input alphabet (2 AP ) − from the automaton A −1 with input alphabet (2 AP ) − +1 and the structure (K, ). In the definition of (Δ, T )-equivalence, A is expected to read an encoding of Π Δ . Thus, the quantifier will be handled by introducing traces summarised with respect to Δ into the automaton. We assume that A −1 is given as an NBA ( −1 , 0, −1 , −1 , −1 ) over the input alphabet (2 AP ) − +1 . This can generally be assumed due to Proposition 2.4.
Since our basis is AP, we can assume Δ( ) = ap for some ap ∈ AP. We construct a fair Kripke structure (K ap , ap ) whose traces represent traces of (K, ) with K = ( , 0 , , ) summarised by the atomic successor formula ap. For this construction, we assume that initial states in K are isolated, i.e. that there are no transitions ( , 0 ), ( , 0 , ) or ( , , 0 ) for all 0 ∈ 0 . This can be achieved by creating copies of the initial states with no incoming transitions as new initial states without changing the set of traces of the Kripke structure. Let ℓ = 0 ∪ { ∈ \ 0 | ap ∈ ( )} and ℓ = { ∈ \ 0 | ap ∉ ( )} be a partition of , i.e. = ℓ ∪ ℓ . Intuitively, due to our assumption on the isolation of initial states, ℓ contains the states that can be visited with the successor formula ap while ℓ contains the states that are skipped as long as ap-successors exist. In traces where ap does not hold from a certain point onwards, states from ℓ are visited up until that point and states from ℓ are visited afterwards. For , ′ ∈ ℓ , we write → ap ′ if there is a path = 1 , 2 , . . . , −1 , = ′ in K such that ( , +1 ) ∈ for all 1 ≤ ≤ − 1 and ∈ ℓ for all 2 ≤ ≤ − 1. If additionally ∈ for some 2 ≤ ≤ , we write → ap, ′ . Then, K ap is given as ( ap , 0,ap , ap , ap ) where: The set of target states ap is given as ( ℓ × {1}) ∪ ( ℓ ∩ ). Intuitively, traces in (K ap , ap ) simulate summarised versions of traces in (K, ) in the following way: A trace starts in states ℓ × {0, 1} where it remains as long as ap-labelled states are seen in the simulated trace. If the simulated trace contains infinitely many ap-successors, it remains in this part of the structure indefinitely.
Otherwise, it switches to states ℓ at the first point without an ap successor and remains in the part of the structure where ap-labelled states cannot be seen anymore.Switches between 0 and 1 states in ℓ × {0, 1} are made to make the simulated trace's visits to target states not labelled ap visible.For this structure, we have Traces(K ap , ap ) = sum ap (Traces(K, )).
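The summarised successor relation used for K ap can be sketched as a reachability computation: s →ap s' holds when s' carries the label ap and is reachable from s via a path whose intermediate states all lack ap (those states are skipped by the atomic successor formula). The graph, labelling and names below are hypothetical.

```python
# Summarised ap-successors: endpoints of paths whose intermediate states are
# not labelled ap, computed by a search that only continues through non-ap
# states. Fairness (the ->ap,f variant) is omitted for brevity.

def ap_successors(edges, labels, s, ap="ap"):
    succs, frontier, seen = set(), [s], set()
    while frontier:
        u = frontier.pop()
        for v in edges.get(u, []):
            if ap in labels[v]:
                succs.add(v)       # an ap-state ends the summarised step
            elif v not in seen:
                seen.add(v)        # non-ap state: skip over it and continue
                frontier.append(v)
    return succs

edges = {"s0": ["s1", "s2"], "s1": ["s3"], "s2": ["s3"], "s3": []}
labels = {"s0": {"ap"}, "s1": set(), "s2": {"ap"}, "s3": {"ap"}}
# s2 is a direct ap-successor; s3 is reached by skipping the unlabelled s1.
assert ap_successors(edges, labels, "s0") == {"s2", "s3"}
```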
Our construction for A = ( , 0, , , ) uses a common way to handle quantifiers. The only two differences to the standard constructions used e.g. for HyperLTL in [Finkbeiner et al. 2015], HyperPDL-Δ in [Gutsfeld et al. 2020] or in [Gutsfeld et al. 2021] are that (i) instead of building the product automaton of A −1 and K, we construct the product of A −1 and (K ap , ap ) and thus have to combine two Büchi acceptance conditions, and (ii) we have a different input alphabet. In the following construction, we write ( 1 , . . ., −1 ) ∈ (2 AP ) − as P and we write ( 1 , . . ., − , ) ∈ (2 AP ) − +1 as P + . In order to improve readability, we write ( , ap ) → P ( ′ , ′ ap ) for ′ ∈ ( , P + ( ap )) and ( ap , , ′ ap ) ∈ ap . For = ∃, A is given as follows: As for other hyperlogics using path or trace quantifiers, universal quantifiers = ∀ are handled by using automata complementation and the fact that a universal quantifier ∀ can be expressed as ¬∃¬ in logics. Generally, such negations can then be handled by complementing the automaton constructed so far, introducing an exponential blowup of its size due to Proposition 2.4. There are some exceptions, however, where this can be avoided. After the substitution of ∀ with ¬∃¬ has been performed in , double negations can be cancelled out. Also, if a negation is introduced at the start or end of the quantifier prefix in this manner, it can be handled easily. An innermost negation can be handled by constructing the automaton for the negation normal form of ¬ instead of constructing the automaton for and then complementing it. An outermost negation can be handled by negating the result of the emptiness test on the automaton for instead of constructing the automaton for ¬ and then testing for emptiness. The remaining negations each correspond to a quantifier alternation in the original formula and thus increase the size of the automaton exponentially for each such quantifier alternation. Also note that the general way to combine different Büchi conditions used in the construction for a single quantifier would induce an exponential blowup in the number of quantifiers if done inductively, even when no quantifier alternations are present. This can, however, be avoided by constructing states × ( ap ) × {0, 1, . . ., } instead of states × ( ap × {0, 1}) to combine + 1 Büchi conditions when handling consecutive quantifiers of the same type. In this altered construction, the size increase due to the combination of Büchi conditions is only polynomial and does not change the size of the final automaton asymptotically.
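The counter trick mentioned here is the standard degeneralisation of a generalised Büchi acceptance condition. A minimal sketch (Python; names are illustrative): states are paired with a counter that advances once the currently awaited acceptance set has been visited, and the new target set is F_0 × {0}, so it is visited infinitely often iff every acceptance set is.

```python
def degeneralize(states, delta, init, acc_sets):
    # Büchi automaton over Q x {0, ..., k-1} equivalent to a generalised
    # Büchi automaton with acceptance sets acc_sets = [F_0, ..., F_{k-1}].
    # The counter i waits for F_i and advances (mod k) when F_i is seen.
    k = len(acc_sets)
    def step(state_ctr, letter):
        q, i = state_ctr
        j = (i + 1) % k if q in acc_sets[i] else i
        return {(q2, j) for q2 in delta.get((q, letter), set())}
    new_states = {(q, i) for q in states for i in range(k)}
    new_init = {(q, 0) for q in init}
    new_targets = {(q, 0) for q in acc_sets[0]}
    return new_states, step, new_init, new_targets
```

The state space grows only by the factor of the number of combined conditions, which is the polynomial increase exploited for consecutive quantifiers of the same type.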

Proof of Theorem 4.9. The part of the claim about the size of A can be seen by inspecting the construction. For the inner formula , we know that |A | is linear in | | for the APA A from Theorem 4.8. An alternation removal construction to transform it into an NBA increases the size to exponential in | |. Complementation constructions are performed for every quantifier alternation, each further increasing the size exponentially. For this, we can interpret an NBA as an APA, complement it without an increase in size, and then transform it into an NBA again with Proposition 2.4. Finally, the size measured in |K | is one exponent smaller since the structure is only introduced into the automaton after the first alternation removal construction.
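The resulting size bound is a tower of exponentials whose height grows with the number of alternations. As a sketch, assuming the tower function is defined in the usual way by g(0, n) = n and g(k+1, n) = 2^{g(k, n)}:

```python
def g(k, n):
    # Tower of exponentials of height k: g(0, n) = n, g(k+1, n) = 2 ** g(k, n).
    # One exponential is added per alternation removal or complementation step.
    return n if k == 0 else 2 ** g(k - 1, n)
```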
The part of the claim about (Δ, Traces(K, ))-equivalence is shown by induction. In order to improve readability, let T = Traces(K, ) in the remainder of the proof. Using the notation = . . . 1 1 . with special cases 0 = and = , we show that A is (Δ, T )-equivalent to by induction on . The base case follows from Theorem 4.8. In the inductive step, assume that the claim holds for −1 . We now show that it holds for as well.
There are two cases based on the form of the outermost quantifier . The case for a universal quantifier follows from the case for an existential quantifier and the fact that complementation on automata corresponds to negation on formulae. For the case of an existential quantifier, we have = ∃ .−1 and Δ( ) = ap for some ap ∈ AP. From the induction hypothesis, we know that −1 is (Δ, T )-equivalent to A −1 . Let Π be an arbitrary trace assignment over T binding the free trace variables in . We show both directions that are required for (Δ, T )-equivalence separately.
For the first direction, assume that Π | = T . This means there is a trace tr ∈ T such that . Furthermore, we know that sum ap (tr) is a trace in (K ap , ap ) since Traces(K ap , ap ) = sum ap (T ). Thus, we obtain a run of A on Π Δ by simulating sum ap (tr) in the second component and simulating the run of A −1 on Π Δ in the first component. It is an accepting run since both the fairness condition of (K ap , ap ) and the Büchi condition of A −1 are satisfied and thus an accepting state of A is visited infinitely often.
For the other direction, assume that Π Δ ∈ L (A ). From the second component of the states of this run, we can extract a trace tr ′ ∈ Traces(K ap , ap ). We know that the trace must be a fair trace since accepting runs of A visit the target states of (K ap , ap ) infinitely often. Since Traces(K ap , ap ) = sum ap (T ), we know that tr ′ = sum ap (tr) for some trace tr ∈ T . From the first component of the states of the run, we can extract a run of A −1 on Π ′Δ where Π ′ is the trace assignment Π[ ↦ → tr]. We know that it is an accepting run since accepting runs of A visit the accepting states of A −1 infinitely often. Since A −1 is (Δ, T )-equivalent to −1 , we know that Π ′ | = T −1 . This witnesses Π | = T .

B.5 Proof of Theorem 4.10
For the proof of Theorem 4.10, we formulate three additional theorems for upper and lower bounds.
Theorem B.1. Fair model checking a mumbling hyperproperty formula with basis AP and unique mumbling against a fair Kripke structure (K, ) is decidable in EXPSPACE where is the alternation depth of the quantifier prefix of . For fixed formulae, it is decidable in ( − 1)EXPSPACE for ≥ 1 and in NLOGSPACE for = 0.

Proof. This follows immediately from Theorem 4.9 and Proposition 2.5. We can test the NBA A (or A ¬ for an outermost universal quantifier) of size O( ( + 1, | | + log(|K |))) for emptiness in nondeterministic space logarithmic in its size to solve the model checking problem. Savitch's theorem gives us membership in the corresponding deterministic space classes.
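The emptiness test amounts to searching for a lasso: an accepting state that is reachable from an initial state and lies on a cycle. A quadratic-time sketch over an explicit graph (the NLOGSPACE bound instead guesses the lasso state by state; letters are irrelevant for emptiness and omitted here):

```python
def nba_nonempty(states, edges, init, accepting):
    # L(A) is nonempty iff some accepting state is reachable from an
    # initial state and reachable from itself via a nonempty path.
    def reachable_from(sources):
        seen, stack = set(sources), list(sources)
        while stack:
            u = stack.pop()
            for v in edges.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen
    from_init = reachable_from(init)
    return any(f in from_init and f in reachable_from(edges.get(f, ()))
               for f in accepting)
```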

Theorem B.2. Fair model checking a mumbling hyperproperty formula with full basis and unique mumbling against a fair Kripke structure (K, ) is decidable in EXPSPACE where is the alternation depth of the quantifier prefix of . For fixed formulae, it is decidable in ( − 1)EXPSPACE for ≥ 1 and in NLOGSPACE for = 0.

Proof. Follows from Lemma 4.3 and the proof of Theorem B.1. More precisely, in Lemma 4.3, the translation of is linear in size and the exponential blowup of K is only in the size of . Moreover, the size of the automaton constructed in Theorem B.1 is one exponent larger when measured in | | compared to the size when measured in |K |. Thus, the automaton that is constructed does not asymptotically increase in size compared to the proof of Theorem B.1.

Theorem B.3. The fair finite state model checking problem for a mumbling hyperproperty formula with unique mumbling and Kripke structure K is hard for EXPSPACE. For fixed formulae, it is ( − 1)EXPSPACE-hard for ≥ 1 and NLOGSPACE-hard for = 0.

Proof. It is easy to see that HyperLTL is subsumed by mumbling with unique mumbling. Thus, we can show the lower bound by a reduction from the HyperLTL model checking problem, for which hardness was shown in [Rabe 2016].
With the help of these, we obtain a simple proof.
Proof of Theorem 4.10. Follows directly from Theorem B.2 and Theorem B.3.

C APPENDIX TO SECTION 5
C.1 Proof of Theorem 5.2
Proof. The property generalised non-interference is given by the HyperLTL formula where encodes a set of low-security variables and encodes a set of high-security variables. It states that for all pairs of traces 1 , 2 , there is a third trace 3 agreeing with 1 on the low-security variables from and agreeing with 2 on the high-security variables from . We show that it is undecidable to check PD | = GNI for pushdown systems PD via a reduction from the equivalence problem for pushdown automata.
Let A 1 and A 2 be pushdown automata recognizing languages L 1 and L 2 respectively. We construct a system PD such that PD | = GNI iff L 1 = L 2 . Specifically, PD contains a copy of A 1 and A 2 with a nondeterministic choice to move to either automaton at the start. Transition symbols of the automata are encoded in low-security variables while a high-security bit (with = { }) indicates whether PD follows a trace from A 1 or from A 2 .
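The shape of this reduction system can be sketched as follows (Python; all names are illustrative, transitions are triples (source, symbol, target), and we assume states are split so that each has a unique incoming symbol):

```python
def gni_system(a1_edges, a2_edges, a1_init, a2_init):
    # Disjoint union of the two automata behind a fresh root with a
    # nondeterministic initial choice. Low-security labels expose the
    # transition symbol; the high-security bit 'h' marks the A2 copy.
    edges, label = {}, {'root': set()}
    def add_copy(tag, aut_edges, high):
        for src, sym, dst in aut_edges:
            edges.setdefault((tag, src), set()).add((tag, dst))
            label[(tag, dst)] = {sym} | ({'h'} if high else set())
    add_copy('A1', a1_edges, high=False)
    add_copy('A2', a2_edges, high=True)
    edges['root'] = {('A1', a1_init), ('A2', a2_init)}
    return edges, label
```

Two traces then agree on the low-security variables iff they spell out the same word, and on the high-security bit iff they come from the same copy, which is exactly what the two directions of the proof exploit.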
For the first direction, assume that PD | = GNI . We choose arbitrary traces tr 1 from the copy of A 1 and tr 2 from the copy of A 2 . If we bind tr 1 to 1 and tr 2 to 2 , we know that there is a trace tr 3 (bound to 3 ) that agrees with tr 1 on low-security variables and with tr 2 on high-security variables. The first of these two conditions ensures that tr 3 encodes the same word as tr 1 , i.e. a word ∈ L 1 . The second of the two conditions ensures that tr 3 is a trace from the copy of A 2 , from which we infer ∈ L 2 . Since tr 1 was chosen as an arbitrary trace from A 1 , we conclude L 1 ⊆ L 2 . By swapping the roles of tr 1 and tr 2 , we can show L 2 ⊆ L 1 analogously.
For the other direction, assume that L 1 = L 2 . We show that PD | = GNI by discriminating cases for the choice of traces for the first two quantifiers. If both quantifiers choose a trace from the same automaton, then 3 can be chosen as the trace bound to 1 . Then, 1 and 3 agree on since they are the same trace and 2 and 3 agree on since both traces are in the same automaton. We thus know PD | = GNI in this case. In the other case, the quantifiers choose traces from different automata. Assume w.l.o.g. that 1 binds a trace from A 1 and 2 binds a trace from A 2 ; the other case is analogous. Let ∈ L 1 be the word encoded by the trace bound by 1 . Since L 1 = L 2 , we know that ∈ L 2 and there is a trace tr in A 2 encoding . We choose tr for 3 . As in the first direction, we know that 3 agrees with 1 on the low-security variables since they encode the same word and that 3 agrees with 2 on the high-security variables since they bind traces from the same automaton. We conclude PD | = GNI in this case as well.

C.2 Detailed Construction of A from Subsection 5.2
For the other input symbols, we define (( ′ , ), ⊤) = true, (( ′ , ), ⊤) = false and (( ′ , ), ) = ( ′ , ) for ∈ {call, ret}. For the priority assignment, we set Ω (( , )) = Ω (( , )) and thus omit the second component of each state in the description. The process is similar to that in the construction of A in Section 4.1 (resp. Appendix B.1): we first assign priorities for states ′ where ′ is a fixpoint variable or fixpoint formula. We assign greatest fixpoints and their variables even priorities and least fixpoints odd priorities, starting at 0 and 1, respectively, for outermost fixpoints and increasing by one for each fixpoint alternation. For all other states, we assign Ω ( ′ ) = max , where max is the highest priority assigned so far.
Intuitively, being in a state ( , ) means that we are currently checking the formula with bit indicating whether we accept or reject if we encounter a ⊤-symbol. For = Δ ′ , we set = to indicate that for to hold, the next step has to be well-aligned. Likewise, for = Δ ′ , we set = to indicate that if the next step is not well-aligned, holds. The priorities are assigned to reflect the nature of fixpoints. Odd priorities for least fixpoints reflect that these states may only be visited a finite number of times unless they are nested within a greatest fixpoint that is also visited infinitely often on that path. Similarly, even priorities for greatest fixpoints reflect that these states may be visited infinitely often. Assigning lower priorities to outer fixpoints reflects that these fixpoints take precedence over the fixpoints that are nested in them.
For the proof of Theorem 5.5, we need to formulate a lemma. Intuitively, it tells us that if Π has a well-aligned prefix of finite length, the semantics of a formula can be characterised by a variant of the formula that has no fixpoints. Let for ∈ N 0 be recursively defined as follows: 0 replaces all Δ ′ subformulae in with false as well as Δ ′ subformulae with true, and +1 is obtained from by replacing every subformula ′ of which is directly in scope of an outermost Δ or Δ operator by ′ . For this, fixpoints are unrolled times for . We formulate the following lemma:
Lemma C.1. Let be a multitrace formula with unique successor assignment Δ, Π be a trace assignment with wapref (Π, Δ) ≠ ∞ and W be a fixpoint variable assignment. For ≤ wapref (Π, Δ), let = wapref (Π, Δ) − . Then for all ≤ wapref (Π, Δ), prog (Π, Δ, ) ∈ .
Proof. By induction on := wapref (Π, Δ). In the base case = 0, the Δ-well-aligned prefix of Π has length 0 and 0 replaces all Δ subformulae with false as well as Δ subformulae with true. The claim can be seen straightforwardly, since succ Δ (Π, (0, ..., 0)) is undefined, which makes the semantics of all Δ subformulae of equivalent to false and all Δ subformulae of equivalent to true.
In the inductive step ↦ → + 1, assume that the claim holds for . We have wapref (Π, Δ) = + 1 and +1− has a nesting depth of + 1 − for Δ and Δ operators. In particular, +1− is obtained from by replacing every subformula ′ of which is directly in scope of an outermost Δ or Δ operator by ′ − . Let Π ′ be a variant of the trace assignment Π in which the subtraces skipped by the first application of succ Δ are removed. Analogously, let W ′ be the fixpoint variable assignment where indices are shifted according to the first application of succ Δ . This means that wapref (Π ′ , Δ) = . For subformulae ′ , the trace assignment Π ′ and the fixpoint variable assignment W ′ , we can use the induction hypothesis and obtain prog (Π ′ , Δ, W ′ for all ≤ . Thus, since prog (Π ′ , Δ, ) in Π ′ and W ′ corresponds to prog (Π, Δ, + 1) in Π and W, we directly obtain prog (Π, Δ, ) ∈ W for ≤ and the analogous claim for Δ ′ subformulae. Also, for = + 1, we obtain the same claim with a similar argument as in the base case. Using this, a straightforward induction on the structure of yields prog (Π, Δ, ) ∈ .
For the proof of Theorem 5.5, we need a stronger version of aligned (Δ, T )-equivalence for multitrace formulae that enables an inductive proof. As in the proof of Lemma 4.2, we use the notion A [ : ] for an automaton with the same behaviour as A except for in state , where it accepts iff the current index of the run is in the set . For simpler notation, we define offsets in Definition C.2 (Inductive aligned (Δ, T )-equivalence): Given a set of traces T , a multitrace formula with free fixpoint variables 1 , . . ., and unique successor assignment Δ as well as an automaton A with states including 1 , . . ., , we call A inductively aligned (Δ, T )-equivalent to iff for all trace assignments Π over T binding the free trace variables of , fixpoint variable assignments W and indices ∈ N 0 with ≤ wapref (Π, Δ), we have prog (Π, Δ, ) ∈ Π W iff A [ 1 : W ( 1 ), . . ., : W ( )] has an accepting ( 0 , ind ( Δ Π , ))-run on Δ Π for some initial state 0 of A.

Proof of Theorem 5.5. The automaton A is given by the construction described in Section 5.2 (resp. Appendix C.2) and has linear size in | | where the size of the transition function is measured by the number of distinct subformulae, in analogy to the size of mumbling formulae. We intend to show that A is inductively aligned (Δ, T )-equivalent to for all sets of traces T .
For this, let T be an arbitrary set of traces, Π be a trace assignment over T and ≤ wapref (Π, Δ). We discriminate two cases based on the form of Δ Π and focus on the harder one, i.e. where Δ Π has a suffix of ⊤-symbols. The other case is completely analogous to the proof of Claim 3 in the proof of Lemma 4.2.
We focus on a finite succession of ( 1 , . . ., ) symbols followed by an infinite suffix of ⊤ symbols where for ≤ wapref (Π, Δ), ind ( Δ Π , ) = .The other cases follow from the fact that the semantics of is invariant under the well-aligned addition and removal of call and ret moves in Π and the fact that these symbols are skipped in the automaton.
As a first step, we show the claim for formulae that do not contain fixpoints or fixpoint variables. This can be done by a structural induction on the form of . For atomic formulae [ap] and ¬[ap] as well as connectives ′ ∨ ′′ and ′ ∧ ′′ , this is straightforward. In the case for next formulae Δ ′ , we discriminate two cases: < wapref (Π, Δ) and = wapref (Π, Δ). For the first of these two cases, the claim follows directly from the induction hypothesis since we have already shown the inductive equivalence for ′ and index + 1. For the second case, we have prog (Π, Δ, ) ∉ Π since we have reached the end of the Δ-well-aligned prefix of Π. Also, A does not have an accepting ( 0 , )-run: the automaton moves to ( ′ , false) with the first symbol ( 1 , . . ., ) of Δ Π [ ] and then moves to false with the second symbol ⊤ of Δ Π [ ]. From there, all runs are rejecting. For dual next formulae Δ ′ , the proof is analogous to the previous case with the difference that we move to true when a ⊤-symbol is encountered. This concludes the proof for fixpoint-free formulae .
Now, we show the claim for general formulae with fixpoints using the fact that we have already shown it for fixpoint-free formulae. In Lemma C.1, we have seen that prog (Π, Δ, ) ∈ Π W iff prog (Π, Δ, ) ∈ wapref (Π,Δ) − Π W for all ≤ wapref (Π, Δ) where wapref (Π,Δ) − is a formula without fixpoints. Since we have already shown the claim for such formulae, we know that A wapref (Π,Δ) − is inductively aligned (Δ, T )-equivalent to wapref (Π,Δ) − . We thus know for all ≤ wapref (Π, Δ) : W ( )] has an accepting ( 0 , )-run on Δ Π . We argue that A [ 1 : W ( 1 ), . . ., : W ( )] has an accepting ( 0 , )-run on Δ Π for an initial state 0 iff A wapref (Π,Δ) − [ 1 : W ( 1 ), . . ., : W ( )] has an accepting ( 0 , )-run on Δ Π for an initial state 0 in order to show our original claim. For this, we transform an accepting run of A [ 1 : W ( 1 ), . . ., : W ( )] into an accepting run of A wapref (Π,Δ) − [ 1 : W ( 1 ), . . ., : W ( )]. Since our run is accepting, it has to end in loops on states true after a finite number of steps, since otherwise it would either move to false from a state [ap] (or ¬[ap] ) or read a symbol ⊤ in a state ′ for some subformula ′ of which is not a dual next formula and then move to false. Similarly, if a symbol ⊤ is read in a state ′ for a dual next formula ′ , we end in a true loop as well. wapref (Π,Δ) − is obtained from by unrolling fixpoints ′ (or .′ ) wapref (Π, Δ) − times and then replacing Δ and Δ operators that are nested more than wapref (Π, Δ) − times by false and true, respectively. This makes A wapref (Π,Δ) − structurally very similar to A . Thus, we can build a run in A wapref (Π,Δ) − that is structurally very similar to the run in A but visits the state ′ (or rather a version of this state for some unrolling of ′ ) instead of the state during the exploration of the fixpoint. Since the acceptance of every branch in our run was induced by the loops on true, the new run is still accepting despite this change in priorities. With similar arguments, an accepting run of A wapref (Π,Δ) − [ 1 : W ( 1 ), . . ., : W ( )] can be transformed into an accepting run of A [ 1 : W ( 1 ), . . ., : W ( )]. This concludes our proof.
C.3 Detailed Construction of (P D ap , ap ) from Subsection 5.2
For ap = Δ( ), we transform (PD, ) with PD = ( , 0 , , ) into a fair pushdown system (PD ap , ap ) that is suitable for a projection construction with A −1 . Here, however, this process is more involved than the corresponding construction of a Kripke structure (K ap , ap ) from Section 4.2 (resp. Appendix B.4). In particular, we are not only faced with the challenge of different behaviour of the mumbling operator in prefixes where the mumbling criterion ap holds and suffixes where it does not hold, which was already present in the construction of (K ap , ap ). We also have to deal with the peculiarities of the Δ-well-aligned encoding of a trace assignment in which one mumbling step is not matched by one, but possibly multiple steps in the encoding. Towards the first challenge, we proceed as in the construction of (K ap , ap ): we divide the state space of our structure into a part where states labelled ap are visited and intermediate states not labelled ap are skipped as well as a part where states labelled ap cannot be visited any more and where intermediate states are not skipped. Towards the second challenge, we make sure that (i) one int-step can be made in the structure corresponding to P-symbols in the encoding of a trace and (ii) ret- and call-steps are made corresponding to the ret-call-profile of the currently progressed subtrace. The construction proceeds in two steps. We first construct an intermediate structure in which we divide the state space for the first challenge and add int-steps for part (i) of the second challenge. In a second step, we then construct the final structure out of the intermediate structure while addressing part (ii) of the second challenge as well.
For this construction, we assume that initial states in PD are isolated, i.e. that there are no transitions ( , 0 ), ( , 0 , ) or ( , , 0 ) for all 0 ∈ 0 . This can be achieved by creating copies of the initial states with no incoming transitions as new initial states without changing the set of traces of the PDS. For the intermediate structure, let = 0 ∪ ℓ ∪ ℓ where ap ∈ ( ) for all ∈ ℓ and ap ∉ ( ) for all ∈ ℓ be the partition of into initial states ( 0 ), non-initial states labelled ap ( ℓ ) and non-initial states not labelled ap ( ℓ ). Intuitively, due to our assumption on the isolation of initial states, states in the first two sets are the ones that are visited while progressing with the mumbling criterion ap, whereas the third set contains the states that are only visited in suffixes not seeing ap any more. The states and labelling of the intermediate system PD ′ = ( ′ , ′ 0 , ′ , ′ ) are given as follows: The set of target states ′ is given by { Before we formally define the transition relation, let us explain the intuition for creating multiple copies of certain states. In this structure, we sort the states into different categories based on what phase they are visited in: states ( , ), ( , ) and ( , pre) are visited in the prefix where ap-labelled states are visited, and states ( , suf ) and ( , sufpend) are visited in the suffix where ap-labelled states cannot be visited anymore. Additionally, notice that we split certain states into two copies ( , ) and ( , ) or ( , suf ) and ( , sufpend). We do this to make sure that every step made by the ap-mumbling can be matched by exactly one int-step in between these copies. In the prefix, we always first visit the left copy ( , ), then make an int-step to the right copy ( , ) and then proceed from there, thus always adding an int-step. In the suffix, we only add an int-step if a call- or ret-transition is taken. These transitions lead to a pending state ( , sufpend) from where the added int-transition leads to ( , suf ).
We now proceed with the definition of the transition relation ′ . Let subs be the set where states ∈ 0 ∪ ℓ are substituted by ( , ) if they occur on the right side of a transition and substituted by ( , ) if they are on the left side of a transition. States ∈ ℓ are substituted by ( , pre) in this set. Formally, the internal transitions of subs are given by Call- and return-transitions are defined analogously. Additionally, let suf be the set obtained from in the following way: on the left side, we substitute states ∉ ℓ with ( , ) and for states ∈ ℓ , we substitute with ( , suf ); on the right side, we substitute states with ( , sufpend) for call or ret transitions and with ( , suf ) for int transitions. Furthermore, we have int transitions from ( , sufpend) to ( , suf ) in suf . Formally, the internal transitions of suf are given by As mentioned, call and return transitions are defined slightly differently in this case. The return transitions of suf are given by Call transitions are defined analogously. Finally, let ℓ be the set {(( , ), ( , )) | ∈ 0 ∪ ℓ }. We define: Intuitively, these transition sets can be understood as follows. The sets ℓ and {(( , sufpend), ( , suf )) | ∈ ℓ } correspond to the additional int-steps discussed before. In subs , we substitute ( , ) for transitions where is on the right and ( , ) for transitions where is on the left to make sure that the internal transition from ( , ) to ( , ) is taken exactly once whenever is visited. In suf , we only take an additional internal move when a call- or ret-transition is taken. This is done by moving to states ( , sufpend) with these transitions, from where the only possible transition is (( , sufpend), ( , suf )). Finally, the set {(( , ), ( ′ , suf )) | ∉ ℓ , ( , ′ ) ∈ } contains the transitions in suf making the switch from the prefix to the suffix.
Using this intermediate structure, we now construct PD ap . Notice that in the suffix, which only visits states not labelled ap and where transitions from suf in PD ′ are taken, these transitions already directly correspond to the ret-call-profile of mumbling steps. As each mumbling transition moves exactly one step in this case, the ret-call-profile of a step can be (1, 0), (0, 1) or (0, 0) depending on whether a ret-, a call- or an int-transition is taken during this single step. The corresponding encoding is a ret- followed by an int-step, a call- followed by an int-step or only an int-step, respectively. Thus, the transitions in suf exactly match the correct encoding.
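This case analysis is a small lookup table; as a literal sketch of the three cases named above (with illustrative names):

```python
def encode_suffix_step(profile):
    # Encoding of one suffix mumbling step from its ret-call-profile:
    # (1, 0) -> a ret- followed by an int-step,
    # (0, 1) -> a call- followed by an int-step,
    # (0, 0) -> only an int-step.
    table = {(1, 0): ['ret', 'int'],
             (0, 1): ['call', 'int'],
             (0, 0): ['int']}
    return table[profile]
```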
We now compute transitions corresponding to (abs * ret) or (abs * call) steps in abstract summarisations. Due to the above observations, we only have to do further calculations in the prefix. We calculate abstract successors in PD ′ with respect to subs . This makes sure that (i) the int-steps from ℓ which were not present in the original structure do not count towards these abstract successors and (ii) states labelled ap cannot be skipped by a calculated abstract successor. During the calculation, we distinguish whether a target state ∈ ′ is visited on the way or not and write → abs ′ for abstract successors not visiting a target state and → abs, ′ for abstract successors visiting target states. Let → * abs be the reflexive and transitive closure of → abs and → * abs, be the relation (→ abs ∪ → abs, ) * → abs, (→ abs ∪ → abs, ) * where (→ abs ∪ → abs, ) * is the reflexive and transitive closure of → abs ∪ → abs, .
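The two closures can be computed by a simple saturation over explicit edge sets, keeping the target-free and target-visiting variants separate; a sketch with illustrative names, where abs_edges encodes the one-step relation → abs and abs_f_edges encodes → abs, :

```python
def abstract_closures(states, abs_edges, abs_f_edges):
    # `plain` approximates the reflexive-transitive closure of ->abs;
    # `hit` approximates all compositions over both relations that use
    # at least one ->abs,F edge. Saturate until no new pair is derived.
    plain = {(s, s) for s in states} | set(abs_edges)
    hit = set(abs_f_edges)
    changed = True
    while changed:
        changed = False
        rels = [(plain, False), (hit, True)]
        for r1, h1 in rels:
            for r2, h2 in rels:
                for a, b in list(r1):
                    for c, d in list(r2):
                        if b == c:
                            target = hit if (h1 or h2) else plain
                            if (a, d) not in target:
                                target.add((a, d))
                                changed = True
    return plain, hit
```

Keeping a pair in both sets when it is reachable with and without a target visit mirrors the distinction between → * abs and → * abs, needed to make skipped target visits visible.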
For the definition of the structure, the state space of PD ′ has to be supplemented slightly. We have to make sure that (i) ret- and call-transitions are taken in the right order and (ii) target states visited on a trace but skipped via abstract successors in its encoding are made visible. For this, we introduce two bits: one bit indicating whether a ret-transition can be taken and one bit indicating whether a target state was recently visited. We define the states and labelling of PD ap = ( ap , 0,ap , ap , ap ) as This structure generates the well-aligned encodings wa tr of traces tr from Traces(PD, ). For this, we have to look at the state labelling whenever we do an internal step, since these steps are made exactly at those points that are inspected in an ap-mumbling. In between those states, ret and call moves can be made in accordance with (abs * ret) and (abs * call) successions from the ret-call-profile of the current finite subtrace. The second component of the state space makes sure that ret- and call-transitions are taken in the right order. ret-transitions are only possible in copy , which is left upon taking a call-transition and only reentered when taking an int-transition.
C.4 Proof of Theorem 5.6
Proof. The part of the claim about the size of A can be seen by inspecting the construction. For the inner formula , we know that |A | is linear in | | for the APA A from Theorem 5.5. An alternation removal construction to transform it into an NBA increases the size to exponential in | |. Complementation constructions are performed using Proposition 2.2 for every quantifier alternation, each further increasing the size exponentially. Finally, the size measured in |PD| is one exponent smaller since the structure is only introduced into the automaton after the first alternation removal construction.
For the second part of the proof, let T = Traces(PD, ). We use the notation = . . . 1 1 . with special cases 0 = and = , and show that A is aligned (Δ, T )-equivalent to by induction on . The base case follows immediately from Theorem 5.5.
In the inductive step, we assume that A −1 is aligned (Δ, T )-equivalent to −1 and show the claim for . There are two cases, = ∃ and = ∀. The more interesting case is the former, where = ∃ .−1 . Let Π be a trace assignment over T binding the free trace variables in . We show both directions of the required equivalence individually.
In the first case, we have wapref (Π, Δ) = wapref (Π ′ , Δ) = ∞. Then, both wa Π and wa Π ′ do not contain ⊤-symbols and each ( 1 , . . ., − ) symbol in Π is extended by a set of atomic propositions from the corresponding position in tr to obtain ( 1 , . . ., − , ). The run 0 1 . . . is constructed from ′ 0 ′ 1 . . . and tr in the same way as in the proof of Theorem 4.9 and stays in copy wa all the time. Its acceptance can be inferred from the acceptance of ′ 0 ′ 1 . . . and the fairness condition of tr with the same argument as used in the proof of Theorem 4.9. The component simulating the multi-automaton A P D does not matter in this case since we never transition to states ⊤ . We know, however, that a transition can always be taken in this component since A P D is reverse-total.
In the second case, we have wapref (Π ′ , Δ) ≠ ∞ and wapref (Π, Δ) > wapref (Π ′ , Δ). Then wa Π ′ consists of ⊤-symbols after the first wapref (Π ′ , Δ) + 1 P-symbols whereas in wa Π , ⊤-symbols start later (if at all). This means that the non-well-alignedness of Π ′ in Δ-step wapref (Π ′ , Δ) + 1 is not due to the non-well-alignedness of the traces in Π, but instead due to the fact that the traces in Π are not well-aligned with tr in this step. In particular, the well-aligned encoding of Π makes a call somewhere in this Δ-step while the well-aligned encoding of tr makes an int- or ret-step (or any other combination of mismatching steps). We construct the run 0 1 . . . as follows. Up until Δ-step wapref (Π ′ , Δ), we construct it in the same way as in the first case, i.e. we stay in copy wa and simulate A −1 on wa Π ′ by taking the traces in Π from the input and constructing tr on the fly in the ap component of the automaton. Then, we move to the ua copy and keep the simulation until we are at the point where the well-aligned encodings of Π and tr make different kinds of steps. In the component representing A P D , we can choose an accepting reverse-run that ends in the last state of the prefix of tr at this point, which is possible since we know that the prefix of tr up until this point has a fair continuation, namely tr. Thus, it is possible to move to states ⊤ at this point, where the run will remain indefinitely. At the same time as moving to ua, we move the component representing A −1 to a state ⊤ from where A −1 is simulated on ⊤ . Since wa Π ′ has a ⊤-suffix from Δ-step wapref (Π ′ , Δ) + 1 onwards and ′ 0 ′ 1 . . . is an accepting run, this leads to an accepting run in A as well.
On the other hand, assume that wa Π ∈ L (A ). We thus have an accepting run 0 1 . . . of A on wa Π . We discriminate two cases based on wapref (Π, Δ). In the first case, where wapref (Π, Δ) = ∞, the run stays in copy wa of A all the time. From the ap component of this run, we can extract the well-aligned encoding of a fair trace tr that is well-aligned with Π. From the −1 component, we also know that A −1 has an accepting run on wa Π ′ where Π ′ denotes the trace assignment Π[ ↦ → tr]. We use the induction hypothesis to obtain that Π ′ | = T −1 and have thus found a witness for Π | = T ∃ .−1 .
In the second case, we have wapref(Π, Δ) ≠ ∞ and the run moves to states ⊤ at some point: either (a) in Δ-step wapref(Π, Δ) + 1 due to reading a ⊤-symbol from the wa copy of the automaton, or (b) due to visiting the copy ua and then ending up there in an earlier Δ-step. Before this point, we can extract a prefix of a trace tr from the A_{n−1} and ap components of the automaton in the same way as in the first case of this direction of the proof. This prefix is then extended into a fair trace tr. In particular, this is possible since an accepting run can only end up in states ⊤ when there is an accepting run of the multi-automaton A_PD on the last configuration before this transition. Let Π′ denote the trace assignment Π[π ↦ tr]. We know that wapref(Π′, Δ) ≤ wapref(Π, Δ). If our run ρ0 ρ1 . . . has the form (a), we know that wapref(Π′, Δ) = wapref(Π, Δ), since the run can only stay in copy wa of the automaton as long as tr and Π are well-aligned. If the run has the form (b) instead, we know that wapref(Π′, Δ) < wapref(Π, Δ), since we have identified the non-well-alignedness of tr and Π before Δ-step wapref(Π, Δ) in this case. In both cases, however, we have simulated A_{n−1} on the correct encoding wa_Π′ and checked that it has an accepting run. We can thus again use the induction hypothesis to obtain that Π′ |=_T φ_{n−1} and have found a witness for Π |=_T ∃π. φ_{n−1}.
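The mismatch detection that sends the run from the wa copy to the ua copy can be pictured as locating the first Δ-step at which two encodings take different kinds of steps (e.g. a call on one side versus an int- or ret-step on the other). The following sketch is only an illustration under assumptions of its own: the function name `first_mismatch` and the encoding of step kinds as strings are not from the paper, whose construction operates on VPA encodings.

```python
def first_mismatch(steps_a, steps_b):
    """Return the index of the first position where the two step
    sequences take different kinds of steps, or None if no mismatch
    occurs on the compared prefix."""
    for i, (a, b) in enumerate(zip(steps_a, steps_b)):
        if a != b:
            return i
    return None

# The run stays in the wa copy up to the mismatch and moves to ua there.
print(first_mismatch(["int", "call", "ret"], ["int", "int", "ret"]))  # 1
```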
The case Q = ∀ uses the fact that Π |=_T ∀π. φ_{n−1} iff Π ̸|=_T ∃π. ¬φ_{n−1} (where the semantics of ¬φ_{n−1} is interpreted as usual), Proposition 2.2 and the same arguments as in the previous case.
C.5 Proof of Theorem 5.7
For the proof of Theorem 5.7, we again formulate additional theorems for upper and lower bounds, as in the proof of Theorem 4.10.

Theorem C.3. Fair model checking a mumbling hyperproperty formula with basis AP, unique mumbling and well-aligned successor operators against a fair pushdown system (PD, ) is decidable in (k + 1)-EXPTIME, where k is the alternation depth of the quantifier prefix. For fixed formulae, it can be decided in EXPTIME.

Proof. Theorem 5.6 gives us a VPA A_φ of size g(k + 1, |φ| + log(|PD|)) that is aligned (Δ, Traces(PD, ))-equivalent to φ for formulae with an outermost existential quantifier. For an outermost universal quantifier, we take the automaton A_¬φ instead. By Proposition 2.2, the intersection of A_φ (resp. A_¬φ) with the automaton recognising encodings of { } (which has constant size) can be tested for emptiness in time polynomial in the size of the automaton, yielding an answer to the model checking problem.
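Proposition 2.2 is used here as a black box. As a rough analogy for the final emptiness test, the sketch below checks emptiness of the intersection of two plain NFAs over finite words by a breadth-first search over the product automaton. The paper's automata are VPAs over infinite words, so this is a deliberate simplification, and all names (`product_nonempty`, the transition-map encoding) are assumptions of this example.

```python
from collections import deque

def product_nonempty(trans_a, init_a, acc_a, trans_b, init_b, acc_b):
    """BFS over the product automaton: a reachable accepting product
    state exists iff the intersection of the two NFA languages is
    non-empty.  Transitions are dicts mapping a state to a list of
    (symbol, successor) pairs."""
    start = (init_a, init_b)
    seen = {start}
    queue = deque([start])
    while queue:
        p, q = queue.popleft()
        if p in acc_a and q in acc_b:
            return True
        for sym_a, p2 in trans_a.get(p, []):
            for sym_b, q2 in trans_b.get(q, []):
                # only synchronised steps on the same symbol survive
                if sym_a == sym_b and (p2, q2) not in seen:
                    seen.add((p2, q2))
                    queue.append((p2, q2))
    return False
```

The running time is polynomial in the product of the automaton sizes, mirroring the "polynomial in the size of the automaton" bound used above when one factor has constant size.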

Theorem C.4. Fair model checking a mumbling hyperproperty formula with unique mumbling and well-aligned successor operators against a fair pushdown system is decidable in (k + 1)-EXPTIME, where k is the alternation depth of the quantifier prefix. For fixed formulae, it can be decided in EXPTIME.

Proof. Follows directly from Theorem C.3 and Lemma 4.3, with the same arguments as presented in the proof of Theorem B.2.

Theorem C.5. The fair pushdown model checking problem for a mumbling hyperproperty formula φ with unique mumbling and well-aligned successors and a fair pushdown system (PD, ) is hard for k-EXPSPACE, where k ≥ 1 is the alternation depth of the quantifier prefix of φ. For fixed formulae and k ≥ 1, it is (k − 1)-EXPSPACE-hard. For k = 0, it is hard for EXPTIME.

Proof. The case k > 0 is an immediate corollary of Theorem B.3 and the fact that fair pushdown model checking subsumes fair finite state model checking. The case k = 0 is by a reduction from the LTL model checking problem against pushdown systems, which is known to be EXPTIME-hard [Bouajjani et al. 1997].
With the help of these, we again obtain a simple proof of Theorem 5.7. The main idea of the reduction in the proof of Theorem 6.1 is to translate the stuttering formula φ with stuttering assignments Γ into a mumbling formula φ′ with successor assignments Δ in which each formula Δ(π) expresses that the valuation of some formula ψ ∈ Γ(π) changes from this point on the trace to the next. Then, all next operators ◯_Γ are replaced with the corresponding next operators ◯_Δ. More concretely, Δ(π) is given as change_{Γ,π} := ⋁_{ψ ∈ Γ(π)} ¬(ψ ↔ ◯ψ). Then, ◯_Δ always advances the traces to the points directly before the points that ◯_Γ would advance them to. To compensate for this effect, all tests [ψ]_π are replaced with [◯ψ]_π. In order to ensure that we are also directly in front of the tested positions initially, we extend the system (PD, ) by a fresh initial state that transitions to the old initial state, obtaining (PD′, ′). It is easy to see that (PD, ) |= φ iff (PD′, ′) |= φ′. It is also easy to see that if (PD, ) is a Kripke structure, then (PD′, ′) is a Kripke structure as well.
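The effect described above, that Δ advances the traces to the points directly before the points that Γ would advance them to, can be replayed on explicit traces: the Γ-successor of a position i is the next position whose valuation differs from that at i, and the position where "the valuation changes from this point to the next" is exactly its predecessor. A minimal sketch, with plain atomic propositions standing in for the formulas in Γ(π) and all function names assumed:

```python
def proj(letter, props):
    """Restrict a letter (set of propositions) to the tested props."""
    return letter & props

def gamma_succ(trace, i, props):
    """Next position whose valuation (w.r.t. props) differs from i,
    i.e. the target of the stuttering next operator."""
    return next((j for j in range(i + 1, len(trace))
                 if proj(trace[j], props) != proj(trace[i], props)), None)

def delta_point(trace, i, props):
    """First position at or after i where the valuation changes from
    this position to the next -- the point marked by Delta(pi)."""
    return next((j for j in range(i, len(trace) - 1)
                 if proj(trace[j], props) != proj(trace[j + 1], props)), None)

trace = [{"a"}, {"a"}, {"a", "b"}, {"a", "b"}, {"b"}]
props = {"a", "b"}
for i in range(len(trace)):
    g, d = gamma_succ(trace, i, props), delta_point(trace, i, props)
    # the change point always sits directly before the Gamma-successor
    assert (g is None and d is None) or g == d + 1
```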
D.2 Proof of Lemma 6.2
Proof. The main idea is to extend the translation of the formula from the proof of Theorem 6.1. If only one stuttering assignment is used in φ = Q_n π_n . . . Q_1 π_1. φ_Γ, the problem with the first position can be addressed directly in the formula φ′ without changing the structure. First of all, fixpoints in φ_Γ are unrolled once such that every test and every next operator ◯_Δ either applies to the initial position only or just to non-initial positions. Then, tests at the initial position are not shifted like the other tests. We call the unquantified formula obtained from φ_Γ so far φ_Δ.
A subtle problem arises if Γ advances a trace onto the second position of that trace: in this case, ◯_Δ operators at the initial position move too far. If we know the set S ⊆ {π_1, . . ., π_n} of traces this problem applies to, we can solve it by removing the ◯_Δ operators at the initial position (which is possible since we unrolled fixpoints) and shifting tests on traces π ∉ S by one Γ-position, replacing [◯ψ]_π with [◯((¬change_{Γ,π}) U (change_{Γ,π} ∧ ◯ψ))]_π. For a specific set S, we write φ_Δ^S for the formula in which the replacements are done in accordance with S. In the final formula, we identify the correct problematic trace set by testing for change_{Γ,π} on the first position of each trace. Our final translation of φ is then given by the quantifier prefix Q_n π_n . . . Q_1 π_1 applied to the corresponding case distinction over the formulae φ_Δ^S.
D.3 Proof of Lemma 6.3
Here, we prove the claims used in the proof of Lemma 6.3 that were not proved there directly. First, we give a detailed version of the first direction of the proof.
D.4 Proof of Lemma 6.5
In the proof of Lemma 6.5 in the main body of the paper, the proof of Claim 2 was omitted. We present this part of the proof here.
We formalise the intuitions presented in the proof in the main body of the paper in additional claims, which we show separately. For these claims, we classify positions on traces bound by Π into three categories. For a trace variable π with Π(π) = tr, and Δ(π) and Γ(π) as defined in the proof, and a position i, we say that i is a position of Type a), b) or c) on π based on the following conditions:
• Type a): There are only finitely many j such that p ∈ tr[j].
• Type b): There are infinitely many j such that p ∈ tr[j], and there are m ∈ {1, . . ., k}, j > i and j̃ > i such that j is a (p ∧ ψ_m)-position and j̃ is a (p ∧ ¬ψ_m)-position of tr.
• Type c): Neither of the previous conditions applies. This is equivalent to the condition that there are infinitely many j with p ∈ tr[j] and, for all m ∈ {1, . . ., k} and j, j̃ > i with p ∈ tr[j] and p ∈ tr[j̃], ψ_m holds at j iff ψ_m holds at j̃.
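The three position types can be made concrete on eventually periodic (lasso) traces, where positions past the finite prefix recur forever. The sketch below is illustrative only: the function name `classify`, the marker proposition `p`, and the simplification of the tested formulas ψ_1, …, ψ_k to plain propositions `qs` are all assumptions of this example; the paper states the conditions for arbitrary infinite traces.

```python
def classify(prefix, loop, p, qs):
    """Classify positions inside the loop of a lasso trace
    (prefix + loop^omega); each letter is a set of propositions.
    The finite prefix cannot contribute behaviour that recurs
    'beyond any bound', so only the loop matters here."""
    if not any(p in letter for letter in loop):
        return "a"          # only finitely many p-positions overall
    # p occurs in the loop, hence infinitely often; check whether some
    # tested proposition distinguishes two p-positions in the loop
    p_letters = [letter for letter in loop if p in letter]
    for q in qs:
        if len({q in letter for letter in p_letters}) == 2:
            return "b"      # q holds on some p-positions but not others
    return "c"              # all p-positions eventually agree on every q
```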

Proof. If i is a position of type a) on π, we distinguish two cases based on how many positions j > i with p ∈ tr[j] there are. If there are no such positions, then clearly succ_Γ(tr, i) = i + 1. Since the valuation of q0 is false on all positions after i, we have succ_Δ(tr, i) = i + 1 as well in this case.
If i is a position of type b) on π, we again have succ_Γ(tr, i) = min{j > i | p ∈ tr[j]}. We choose j and j̃ as the minimal positions greater than i such that j is a (p ∧ ψ_m)-position and j̃ is a (p ∧ ¬ψ_m)-position. Since j = j̃ is impossible, we distinguish two cases, j < j̃ and j > j̃. We start with the case j < j̃, where there is a positive number of (p ∧ ψ_m)-positions between i and j̃. If q_m ∈ tr[i], then this number is even. For all i < i′ < min{l > i | p ∈ tr[l]}, the number of (p ∧ ψ_m)-positions between i′ and j̃ is even as well, since p ∉ tr[i′] for all such i′. Thus q_m ∈ tr[i′]. On the other hand, the number of (p ∧ ψ_m)-positions between min{l > i | p ∈ tr[l]} and j̃ is odd, since min{l > i | p ∈ tr[l]} is a (p ∧ ψ_m)-position. Thus q_m ∉ tr[min{l > i | p ∈ tr[l]}]. Analogously, if q_m ∉ tr[i], then q_m ∉ tr[i′] for all i < i′ < min{l > i | p ∈ tr[l]} and q_m ∈ tr[min{l > i | p ∈ tr[l]}]. We conclude succ_Δ(tr, i) = min{l > i | p ∈ tr[l]}. The other case, j > j̃, is analogous to the case j < j̃ with the roles of q_m and q̃_m switched. This concludes the proof of Claim 5.
Claim 6. Let π be a path variable with Π(π) = tr and Γ(π) as in Claim 5, and let i be a position of type c) on π. Then succ_Δ(tr, i) = i + 1.

Proof. The valuation of q0 is false on all positions of a trace with positions of type c). We show that the valuations of q_m and q̃_m are also constant for all m ∈ {1, . . ., k} and positions j ≥ i. Fix an arbitrary m ∈ {1, . . ., k}. We distinguish two cases based on whether all positions j′ > i with p ∈ tr[j′] satisfy ψ_m or all positions j′ > i with p ∈ tr[j′] satisfy ¬ψ_m. Consider the case where for all j′ > i with p ∈ tr[j′], ψ_m holds at j′. We argue that q_m ∉ tr[j] and q̃_m ∈ tr[j] for all j ≥ i. For q_m, this is due to the fact that the base case of the fixpoint formula is never satisfied at these positions.
We show by transfinite induction that, for all ordinals α ≥ 0, (i) F̂^α(∅) is invariant under type c) substitutions and (ii) F^α(∅) and F̂^α(∅) are equivalent on tr. In this induction's base case, F^0(∅) and F̂^0(∅) are both empty and thus satisfy both claims.
In the case for successor ordinals, the induction hypothesis from the transfinite induction yields that (1) F̂^α(∅) is invariant under type c) substitutions and (2) F^α(∅) and F̂^α(∅) are equivalent on tr.
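The parity argument behind q0 in Claim 5 can be replayed on finite trace segments: q0 ("an odd number of p-positions lie strictly after the current one") changes its value exactly at p-positions, so the next p-position after i is the first position whose q0-value differs from that at i. The sketch below checks this equivalence; the names `q0`, `next_p_direct` and `next_p_via_parity` are assumptions of this illustration, and the exact fixpoint formula defining q0 is in the paper.

```python
def q0(trace, i, p):
    """Does an odd number of p-positions lie strictly after i?"""
    return sum(p in letter for letter in trace[i + 1:]) % 2 == 1

def next_p_direct(trace, i, p):
    """The next p-position after i, by direct search."""
    return next((j for j in range(i + 1, len(trace)) if p in trace[j]), None)

def next_p_via_parity(trace, i, p):
    """The next p-position after i, detected as the first position
    where the q0 valuation flips relative to position i."""
    return next((j for j in range(i + 1, len(trace))
                 if q0(trace, j, p) != q0(trace, i, p)), None)

trace = [set(), {"p"}, set(), {"p"}, {"p"}, set()]
for i in range(len(trace)):
    # the two characterisations agree at every position
    assert next_p_direct(trace, i, "p") == next_p_via_parity(trace, i, "p")
```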

Theorem 4.1. The finite state model checking problem for mumbling is undecidable.

Lemma 4.3. Let φ be a mumbling hyperproperty formula with full basis and (PD, ) be a fair PDS. There is an extended set of atomic propositions AP′ ⊇ AP such that one can construct a mumbling formula φ′ of size O(|φ|) with basis AP′ and a fair PDS (PD′, ′) of size O(|PD| · 2^{p(|φ|)}) for a polynomial p such that (PD, ) |= φ iff (PD′, ′) |= φ′. Moreover, φ and φ′ have the same number of successor assignments. If PD is a Kripke structure, then PD′ is also a Kripke structure.
2 and 6.1 from [Gutsfeld et al. 2021]: Let ψ be a quantifier-free closed synchronous formula. There is an APA A_ψ of size linear in |ψ| that is T-equivalent to ψ for all sets of traces T. Together, Lemma 4.6 and Theorem 4.7 give us:
Theorem 4.8. For any closed multitrace formula φ with unique successor assignment Δ and basis AP, there is an APA A_φ of size linear in |φ| that is (Δ, T)-equivalent to φ for all sets of traces T.

Fig. 4. Valuation of q0 on a trace with finitely many p-positions. The numbers in the line labelled # indicate the number of p-positions after the current position.

Fig. 5. Valuation of q_m and q̃_m on a trace with infinitely many p-positions. The numbers in the line labelled # indicate the number of relevant positions for each position, as described in (1) and (2). A question mark indicates that the valuation depends on the continuation of the trace.
Follows directly from Theorem C.4 and Theorem C.5.

D APPENDIX TO SECTION 6
D.1 Proof of Theorem 6.1
Proof. Let (PD, ) and φ be the inputs for the fair pushdown model checking problem for stuttering formulae.

If there are such positions, then succ_Γ(tr, i) = min{j > i | p ∈ tr[j]} by definition. We argue that succ_Δ(tr, i) = min{j > i | p ∈ tr[j]} as well in this case: If q0 holds at position i, then there is an odd number of p-positions after position i. For all i < j < min{l > i | p ∈ tr[l]}, there is also an odd number of p-positions after j, thus q0 holds at j as well. For min{l > i | p ∈ tr[l]}, on the other hand, there is an even number of p-positions after it and thus q0 does not hold there. Analogously, if q0 does not hold at i, then q0 does not hold at any j with i < j < min{l > i | p ∈ tr[l]}, while q0 holds at min{l > i | p ∈ tr[l]}. Thus, succ_Δ(tr, i) = min{l > i | p ∈ tr[l]}. If i is a position of type b) on π, we again have succ_Γ(tr, i) = min{j > i | p ∈ tr[j]}.