Reachability in Continuous Pushdown VASS

Pushdown Vector Addition Systems with States (PVASS) consist of finitely many control states, a pushdown stack, and a set of counters that can be incremented and decremented, but not tested for zero. Whether the reachability problem is decidable for PVASS is a long-standing open problem. We consider continuous PVASS, which are PVASS with a continuous semantics. This means that the counter values are rational numbers and whenever a vector is added to the current counter values, this vector is first scaled with an arbitrarily chosen rational factor between zero and one. We show that reachability in continuous PVASS is NEXPTIME-complete. Our result is unusually robust: Reachability can be decided in NEXPTIME even if all numbers are specified in binary. On the other hand, NEXPTIME-hardness already holds for coverability, in fixed dimension, for bounded stack, and even if all numbers are specified in unary.


INTRODUCTION
Pushdown VASS (PVASS) is a model of computation which combines a Pushdown Automaton (PDA) and a Vector Addition System with States (VASS) by using both a stack and counters. Since PDAs naturally model sequential computation with recursion [Alur et al. 2005; Reps et al. 1995] and VASSs naturally model concurrency [Karp and Miller 1969], the combination of the two is an expressive modelling paradigm. For instance, PVASS can be used to model recursive programs with unbounded data domains [Atig and Ganty 2011], beyond the capability of PDA alone. They can also model context-bounded analysis of multi-threaded programs [Atig et al. 2009], even when one thread can have arbitrarily many context switches. In program analysis, one-dimensional PVASS models certain pointer analysis problems [Kjelstrøm and Pavlogiannis 2022; Li et al. 2021].
Our main result is that reachability in Q+-PVASS is NEXPTIME-complete (Theorem 1.2). Further, if we allow the configurations to be encoded in binary, then hardness already holds for coverability in 13-dimensional Q+-PVASS.
Our result is in stark contrast to reachability problems in classical VASS: It is well-known that there, coverability is EXPSPACE-complete [Rackoff 1978], whereas general reachability is Ackermann-complete [Czerwiński and Orlikowski 2022; Leroux 2022]. Furthermore, fixing the dimension brings the complexity down to primitive-recursive [Leroux and Schmitz 2019] (or from EXPSPACE to PSPACE in the case of coverability [Rosier and Yen 1986]).
Another surprising aspect is that for continuous PVASS, the coverability problem and the state reachability problem do not have the same complexity. We also show: Theorem 1.3. The state reachability problem for Q+-PVASS is NP-complete. This is also in contrast to the situation in PVASS: There is a simple reduction from the reachability problem to the coverability problem [Leroux et al. 2015], and from there to state reachability. Thus, the three problems are polynomial-time inter-reducible for PVASS.
Our results are based on a number of novel technical ingredients. Especially for our lower bounds, we show a number of subtle constructions that enable us to encode discrete computations of bounded runs of counter machines in the continuous semantics.

Ingredient I: Upper bound via rational arithmetic. We prove the NEXPTIME upper bound by observing that a characterization of runs in a cyclic Q+-VASS (meaning: the initial state is also the only final one) by Blondin and Haase [2017] still holds in the more general setting of cyclic Q+-PVASS. We apply this observation by combining several (known) techniques. As is standard in the analysis of PVASS [Englert et al. 2021; Leroux et al. 2015], we view runs as derivations in a suitable grammar. As usual, one can then decompose each derivation tree into an acyclic part and "pump derivations" of the form A ⇒* w A w′ for some non-terminal A. Such pumps, in turn, can be simulated by a cyclic Q+-PVASS. Here, to simulate A ⇒* w A w′, one applies w as is and one applies w′ in reverse on a separate set of counters. This idea of simulating grammar derivations by applying "the left part forward" and "the right part backward" is a recurring theme in the literature on context-free grammars (see, e.g., [Baumann et al. 2023; Berstel 1979; Lohrey et al. 2022; Reps et al. 2016; Rosenberg 1967]) and has been applied to PVASS by Leroux et al. [2015, Section 5].
As a consequence, reachability can be decided by guessing an exponential-size formula of Existential Linear Rational Arithmetic (ELRA). Since satisfiability for ELRA is in NP, this yields an NEXPTIME upper bound.
Ingredient II: High precision and zero tests. For our lower bound, we reduce from reachability in machines with two counters, which can only be doubled and incremented. A run in these machines is accepted if it is of a particular length (given in binary) and has both counters equal in the end. We call such machines 2CM^{•2,+1}_RL, where RL stands for "run length". This problem is NEXPTIME-hard by a reduction from a variant of the Post Correspondence Problem where the word pair is restricted to have a specified length (given in binary) [Aiswarya et al. 2022].
We give the desired reduction from 2CM^{•2,+1}_RL through a series of intricate constructions that "control" the fractional firings. We go through an intermediate machine model called [0,1]-VASS^{0?}_RL, which is just like a Q+-VASS, except that the counter values are constrained to be in the interval [0,1] and we allow zero tests. Further, we only consider runs up to a particular length (given in binary), as indicated by the RL subscript. When reducing from 2CM^{•2,+1}_RL, we are confronted with two challenges: First, a [0,1]-VASS^{0?}_RL cannot store numbers beyond 1, and second, it cannot natively double numbers. The key idea here is that, since we only consider runs of bounded length, instead of storing the counter values of a 2CM^{•2,+1}_RL exactly, we use exponential precision: we encode a number m ∈ N by m/2^n ∈ [0,1] for a suitable exponent n. Since then all the values are in [0,1], we can double the counter values in a [0,1]-VASS^{0?}_RL by forcing the firing fraction of the [0,1]-VASS^{0?}_RL to be a particular value. The firing fraction is controlled, in turn, by means of the zero tests.
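As a concrete illustration of this encoding (our own sketch, not a construction from the paper), exact rational arithmetic shows how increments and doublings act on encoded values; the function name `encode` and the exponent `N` are ours:

```python
from fractions import Fraction

N = 5  # precision exponent: integer counter values c are encoded as c/2**N

def encode(c):
    """Encode the integer counter value c as the rational c / 2**N in [0, 1]."""
    return Fraction(c, 2 ** N)

# An increment of the discrete counter becomes adding 1/2**N:
assert encode(6) + Fraction(1, 2 ** N) == encode(7)
# Doubling stays inside [0, 1] as long as the encoded value is at most 1/2:
assert 2 * encode(12) == encode(24) and 2 * encode(12) <= 1
```

All arithmetic is exact, mirroring the fact that the construction manipulates rationals with powers of two as denominators.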
Ingredient III: A halving gadget. To realize increments at this precision in a [0,1]-VASS^{0?}_RL, we need to be able to add 1/2^n to a counter. To this end, we present a [0,1]-VASS^{0?}_RL gadget of polynomial size that produces the (exponentially precise) number 1/2^n in a given counter. The idea is to start with 1 in the counter and halve it n times. The trick is to use an additional counter that goes up by 1/n for each halving step. Checking this counter to be 1 in the end ensures that exactly n halvings have been performed.
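The arithmetic behind this gadget can be checked directly. The following is our sketch of that accounting (the real gadget is of course a [0,1]-VASS^{0?}_RL, not a program, and `halving_gadget` is a hypothetical name):

```python
from fractions import Fraction

def halving_gadget(n):
    """Start with 1, halve it n times; a control counter gains 1/n per
    halving step. The control counter equals 1 at the end iff exactly n
    halvings happened, certifying the produced value 1/2**n."""
    value, control = Fraction(1), Fraction(0)
    for _ in range(n):
        value /= 2
        control += Fraction(1, n)
    assert control == 1  # exactly n halvings were performed
    return value         # = 1 / 2**n

assert halving_gadget(10) == Fraction(1, 2 ** 10)
```

Because the increment 1/n is exact, any run with fewer or more than n halvings leaves the control counter different from 1.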
Ingredient IV: Complement counters. Next, we reduce from [0,1]-VASS^{0?}_RL to Q+-VASS_RL, which are Q+-VASS with a run-length constraint. Here, we need to (i) make sure that the counter values remain in [0,1] and (ii) simulate zero tests. We achieve both by introducing a complement counter x̄ for each counter x, where it is guaranteed that x + x̄ = 1 at all times. This means that instead of checking x = 0, we can check x̄ = 1 by subtracting 1 from x̄. However, this does not suffice: we need to ensure that the firing fraction is exactly 1 in these steps. Here, the key idea is, whenever we check x̄ = 1, we also increment at the same time (and thus with the same firing fraction) a separate counter called the controlling counter, which in the end must equal N, the total number of zero tests. This exploits the fact that every run attempts the same pre-determined number of zero tests due to the run-length constraint. If the controlling counter reaches the value N at the very end, then we are assured that every zero test along the run was indeed performed correctly.
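The accounting argument for the controlling counter can be sketched as follows (our illustration, with one test per counter and `zero_tests_ok` a name of ours):

```python
from fractions import Fraction

def zero_tests_ok(xbars, alphas):
    """Each zero test fires with fraction alpha: it subtracts alpha from a
    complement counter xbar (legal only if xbar >= alpha, since counters
    stay nonnegative) and adds the same alpha to a shared controlling
    counter. Acceptance requires the controlling counter to equal the
    number of tests, which forces every alpha to be exactly 1 -- and
    alpha = 1 is only legal when xbar was 1, i.e. the counter was 0."""
    control = Fraction(0)
    for xbar, alpha in zip(xbars, alphas):
        if not (0 < alpha <= 1 and xbar >= alpha):
            return False  # the step itself would be illegal
        control += alpha
    return control == len(xbars)

assert zero_tests_ok([1, 1], [1, 1])                   # all tests honest
assert not zero_tests_ok([1, 1], [1, Fraction(1, 2)])  # cheating fraction
assert not zero_tests_ok([Fraction(1, 2), 1], [1, 1])  # counter was not 0
```

The point is that a dishonest fraction strictly below 1 can never be compensated later, so the controlling counter ends below the target.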
Finally, we reduce from Q+-VASS_RL to Q+-PVASS by using the pushdown stack to count from zero up to a number specified in binary. This employs a standard trick for encoding a binary number on the stack, where the least significant bit is on top. We further show that the final Q+-PVASS that we construct has bounded stack-height and 13 counters, and also that the target configuration can be reached from the source configuration if and only if the target can be covered from the source. This proves that coverability is hard even for a constant number of counters.
Ingredient V: Unary encodings. The above reduction produces instances of Q+-PVASS where the configurations are encoded in binary. Proving hardness for unary encodings requires more ideas. First, by using a trick akin to the exponential precision from above, we show that hardness of coverability in Q+-PVASS holds already when all the values of the given configurations are less than 1. Next, by reusing the doubling and the halving gadgets from Ingredients II and III, we show that for any fraction c/2^n, where n is given in binary, there exists an amplifier: a Q+-VASS of size polynomial in the binary encodings of c and n which, starting from a unary-encoded configuration, is able to put the value c/2^n in a given counter. We then simply plug in a collection of these amplifiers before and after our original Q+-PVASS to get the desired result.

Related work.
There have been several attempts to study restrictions or relaxations of the PVASS reachability problem. For example, reachability is decidable when one is allowed to simultaneously test the first k counters of a VASS for zero for any k [Reinhardt 2008]; this model can be seen as a special case of PVASS [Atig and Ganty 2011]. Furthermore, the coverability problem in one-dimensional PVASS is decidable [Leroux et al. 2015] and PSPACE-hard [Englert et al. 2021]. Reachability is decidable for bidirected PVASS [Ganardi et al. 2022], although the best known upper bound is Ackermann time (primitive-recursive time in fixed dimension). Our work is in the same spirit. The continuous semantics reduces the complexity of reachability from Ackermann-complete for VASS to NP-complete [Blondin and Haase 2017] (and even to P for Petri nets [Fraca and Haddad 2015]). Our results show that the presence of a stack retains decidability, but allows exponentially more computational power.
All missing proofs can be found in the appendix of this paper.

PRELIMINARIES
We write Q for the set of rationals and Q+ for the set of nonnegative rationals. Vectors over Q (or Q+) are written in bold (u, v, etc.) and are represented as a pair of natural numbers (numerator and denominator) for each rational. Throughout this paper, all numbers will be encoded in binary, unless stated otherwise. Note that this means that each rational number is a pair of natural numbers, both encoded in binary.

Machine models. A d-dimensional Continuous Vector Addition System with States (d-Q+-VASS or simply Q+-VASS) M = (S, T, Δ) consists of a finite set S of states, a finite set T ⊆ Z^d of transitions, and a finite set Δ ⊆ S × T × S of rules. We will on occasion consider an infinite Q+-VASS where S continues to be finite, but T and Δ are infinite.
A configuration of M is a tuple c = (q, v) where q is a state and v ∈ Q^d_+ is a valuation of the counters. We use the notations state(c), val(c), c(i) to denote q, v, v(i) respectively. Let t = (q, a, q′) ∈ Δ be a rule and let α ∈ (0, 1] be the firing fraction. A step from a configuration c to another configuration c′ by means of the pair (t, α) (denoted c →_{t,α} c′) requires that state(c) = q, state(c′) = q′, and val(c′) = val(c) + α · a, where val(c′) ∈ Q^d_+.

We assume the reader is familiar with context-free grammars and give basic definitions and notation (see, e.g., [Sipser 2012]). A context-free grammar G = (N, S, Σ, P) consists of a finite set of nonterminals N, a starting nonterminal S, a finite alphabet Σ, and a finite set of production rules P ⊆ N × (N ∪ Σ)*. We will assume that G is in Chomsky Normal Form. As usual, ⇒* is the reflexive, transitive closure of ⇒. A word w ∈ Σ* belongs to the language L(G) of the grammar iff S ⇒* w.
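The continuous step semantics can be sketched as follows (our illustration; the tuple layout and state names are ours):

```python
from fractions import Fraction

def step(config, rule, alpha):
    """One continuous step: scale the update vector of `rule` by the
    firing fraction alpha in (0, 1] and add it to the counters, provided
    all counters stay nonnegative. Returns the new configuration, or
    None if the step is not enabled."""
    state, vals = config
    src, update, dst = rule
    if state != src or not (0 < alpha <= 1):
        return None
    new_vals = tuple(v + alpha * u for v, u in zip(vals, update))
    if any(v < 0 for v in new_vals):
        return None
    return (dst, new_vals)

# A rule decrementing counter 0 by 1, fired with fraction 1/3:
assert step(("q0", (Fraction(1), Fraction(0))),
            ("q0", (-1, 0), "q1"),
            Fraction(1, 3)) == ("q1", (Fraction(2, 3), Fraction(0)))
```

Note that the same rule fired with a different fraction leads to a different successor, which is exactly the source of the model's continuous behaviour.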
A Continuous Pushdown VASS (Q+-PVASS) is a Q+-VASS additionally equipped with a stack. Formally, it is a tuple M = (S, Γ, T, Δ) where S is a finite set of states, Γ is a finite stack alphabet, T ⊆ Z^d × (Γ ∪ Γ̄ ∪ {ε}) is a finite set of transitions, and Δ ⊆ S × T × S is a finite set of rules. Here, Γ̄ = {ā : a ∈ Γ} contains a pop symbol ā for each stack symbol a ∈ Γ. A configuration c = (q, γ, v) of M additionally contains the stack content γ ∈ Γ* and we write γ = stack(c).

A step c →_{t,α} c′ for a rule t = (q, (a, s), q′) ∈ Δ and a firing fraction α ∈ (0, 1] requires, as before, that state(c) = q, state(c′) = q′, and val(c′) = val(c) + α · a ∈ Q^d_+, and one of the following holds: (1) s ∈ Γ and stack(c′) = s · stack(c), (2) s = b̄ for some b ∈ Γ and stack(c) = b · stack(c′), or (3) s = ε and stack(c) = stack(c′). The notions of run, firing sequence, and reachability are defined as for Q+-VASS. In some cases, we will need to extend the notion of step to allow vectors in Q^d in a configuration rather than just Q^d_+. We then explicitly specify this in the form of a subscript, writing →_Q for steps under this relaxed Q-semantics.

Decision Problems. The reachability problem for Q+-PVASS is defined as follows: Given a Q+-PVASS M and two of its configurations c_0, c_1, is c_1 reachable from c_0? The coverability problem for Q+-PVASS is defined as follows: Given a Q+-PVASS M and two of its configurations c_0, c_1 = (q_1, γ_1, v_1), does there exist a configuration c = (q_1, γ_1, v) with v ≥ v_1 that is reachable from c_0? The state reachability problem is defined as follows: Given a Q+-PVASS M, a configuration c_0 and a state q, does there exist a configuration c_1 with state(c_1) = q that is reachable from c_0?

Example 2.1. Let us consider the Q+-PVASS from Figure 1, which we shall denote by M. It has 2 counters and stack symbols a and b. Recall that a label a represents a push of a and ā represents a pop of a. There are only two outgoing rules from the state q_0: the first rule r_1 decrements the first counter by 1, does not modify the second counter and pushes a onto the stack; the second rule r_2 decrements the second counter by 1, does not modify the first counter and pushes b onto the stack. Hence, starting from the configuration (q_0, ε, (0, 0)) it is not possible to reach a configuration whose state is q_1. This implies that the input (M, (q_0, ε, (0, 0)), q_1) is a negative instance of the state reachability problem.
However, the input (M, (q_0, ε, (1, 1)), (q_1, ε, (1, 1))) is a negative instance of the coverability problem. To see this, suppose for the sake of contradiction that a run exists between (q_0, ε, (1, 1)) and (q_1, ε, (c_1, c_2)) for some c_1 ≥ 1, c_2 ≥ 1. The first step of this run has to fire either r_1 or r_2 with some non-zero fraction α. Suppose r_1 is fired. (The argument is similar for the other case.) Then a gets pushed onto the stack and the value of the first counter becomes 1 − α. From that point onwards, the only rules that can be fired are the ones going in and out of the state q_2, both of which do not increment the first counter. Hence, the first counter never exceeds 1 − α < 1 throughout the run, which leads to a contradiction.

UPPER BOUND FOR REACHABILITY
We first prove the NEXPTIME upper bound in Theorem 1.2. To this end, we first use a standard language-theoretic translation to slightly rephrase the reachability problem in Q+-PVASS (this slight change in viewpoint is also taken in other work on PVASS [Englert et al. 2021; Leroux et al. 2015]). Observe that when we are given two configurations c_0 and c_1 of a Q+-PVASS, we want to know whether there exists a sequence w ∈ (Z^d)* of update vectors such that (i) there exists a sequence ρ of transitions that applies w, such that ρ is a valid run from c_0 to c_1 in the pushdown automaton underlying the Q+-PVASS (thus ignoring the counter updates) and (ii) there exist firing fractions for each vector in w such that adding the resulting scaled update vectors leads from u to v, where u, v are the counter vectors in the configurations c_0 and c_1. Now observe that the set of words w as in condition (i) is a context-free language. Therefore, we can phrase the reachability problem in Q+-PVASS by asking for a word in a context-free language that satisfies condition (ii).
Let us make condition (ii) precise. Let Σ ⊆ Z^d be the finite set of vectors that appear as transition labels in our Q+-PVASS. Given two vectors u, v ∈ Q^d_+ and a word w = a_1 a_2 ⋯ a_n ∈ Σ* with each a_i ∈ Σ, we say that u →_w v if there exist firing fractions α_1, …, α_n ∈ (0, 1] such that every partial sum u + Σ_{i≤j} α_i a_i belongs to Q^d_+ and u + Σ_{i≤n} α_i a_i = v. Similarly, given a language L ⊆ Σ*, we say that u →_L v if u →_w v for some w ∈ L. By our observation above, the reachability problem in Q+-PVASS is equivalent to the following problem: given a context-free language L ⊆ (Z^d)* and vectors u, v ∈ Q^d_+, decide whether u →_L v. We solve this problem using results about the existential fragment of the first-order theory of (Q, +, <), which we call Existential Linear Rational Arithmetic (ELRA). Our algorithm constructs an ELRA formula for the following relation R_L.
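Condition u →_w v for a guessed choice of firing fractions can be checked mechanically; the following is our sketch of that definition:

```python
from fractions import Fraction

def reaches(u, w, alphas, v):
    """Check u ->_w v for the given firing fractions: scale each update
    vector in w by its fraction, add it, and require every intermediate
    counter valuation to stay nonnegative."""
    cur = list(u)
    for a, alpha in zip(w, alphas):
        if not 0 < alpha <= 1:
            return False
        cur = [c + alpha * x for c, x in zip(cur, a)]
        if any(c < 0 for c in cur):
            return False
    return cur == list(v)

# Two opposite updates, each fired with fraction 1/2, return to the start:
u = (Fraction(1), Fraction(0))
assert reaches(u, [(-1, 1), (1, -1)], [Fraction(1, 2), Fraction(1, 2)], u)
```

Of course, the algorithmic difficulty lies in quantifying over all words of a context-free language and all fractions at once, which is what the ELRA encoding below achieves.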
Definition 3.1. The reachability relation R_L corresponding to a context-free language L ⊆ (Z^d)* is given by R_L = {(u, v) ∈ Q^d_+ × Q^d_+ : u →_L v}.

The following definition of computing a formula using a non-deterministic algorithm is inspired by the definition of leaf language from complexity theory [Papadimitriou 2007]. We say that one can construct an ELRA formula in NEXPTIME (resp. NP) for a relation R ⊆ Q^k_+ if there is a non-deterministic exponential (resp. polynomial) time-bounded Turing machine such that every accepting path of the machine computes an ELRA formula, and if φ_1, …, φ_m are the produced formulae, then their disjunction ⋁_{i=1}^m φ_i defines the relation R. Here, a formula φ is said to define a relation R ⊆ Q^k_+ if for every k-tuple x ∈ Q^k_+, we have x ∈ R iff φ(x) is true over the rational numbers.
Proposition 3.2. Given a context-free language L ⊆ (Z^d)*, one can construct in NEXPTIME an ELRA formula for the relation R_L.
Since the truth problem for ELRA formulae can be solved in NP [Sontag 1985], the NEXPTIME upper bound follows from Proposition 3.2: Our algorithm would first non-deterministically compute a disjunct φ of the ELRA formula for R_L and then check the truth of φ in NP in the size of φ. This is a non-deterministic algorithm that runs in exponential time.
Therefore, the remainder of this section is devoted to proving Proposition 3.2. The key difficulty lies in understanding the reachability relation along pumps, which are derivations of the form A ⇒* w A w′ for some non-terminal A and terminal words w, w′.
Definition 3.3. Let G be a context-free grammar over Z^d and A a non-terminal in G. The pump reachability relation R_A is the set of tuples (u, u′, v′, v) ∈ (Q^d_+)^4 such that there is a derivation A ⇒* w A w′ with u →_w u′ and v′ →_{w′} v.

Theorem 3.4. Given a context-free grammar G over Z^d and a non-terminal A in G, one can compute in NEXPTIME an ELRA formula for the relation R_A.
Before proving Theorem 3.4, we first show how Proposition 3.2 follows from it. Let G be a grammar for the language L in Proposition 3.2. Consider an arbitrary derivation tree T of G. We say a derivation tree is pumpfree if along every path of the tree, every nonterminal occurs at most once. Clearly, an arbitrary derivation tree T can be obtained from a pumpfree tree by inserting "pumping" derivations of the form A ⇒* w A w′. Since every pumpfree tree is exponentially bounded in size (its depth is bounded by the number of nonterminals |N|), there can only be exponentially many such pumps that need to be inserted for any given nonterminal A.
A pump A ⇒* w A w′ on a nonterminal A in an arbitrary derivation tree T can be replaced by additional terminal letters called pump letters (A, n) and (Ā, n), as shown in Fig. 2. Let the two occurrences of A be the first and last occurrences of A along a path. Here n ∈ {0, 1}* is a vector denoting the node which is labelled by the first A. Note that we assume that the grammar is in Chomsky Normal Form; hence nodes in the derivation tree can be identified in this manner, since the trees are binary trees. The tree T′ contains four children at n, with the first and fourth being the pump letters and the second and third being labelled by the nonterminals B, C occurring in the production A → BC. It could also be the case that the rule used is of the form A → a, in which case there are only three children: the middle one being the letter a and the other two the pump letters. Repeating this replacement procedure along each path, we finally obtain a pumpfree tree which does not have a repeated nonterminal along any path. Since this tree is exponentially bounded in size, the number of pump letters introduced is also exponentially bounded. In particular, every vector n lies in {0, 1}^h for some h ≤ |N|, where N is the set of nonterminals of G.
The algorithm guesses an exponential-sized tree T and verifies the consistency of node labels between parent and children nodes in the tree using the rule set P of the grammar. It then constructs a formula φ_T as follows. The formula φ_T contains variables for a sequence of firing fractions and vectors x_0, α_1, x_1, …, α_m, x_m, where m is the number of leaf nodes in T. Let a_i be the label of the i-th leaf node. The constructed formula is the conjunction of the following formulae φ_i for each leaf i: • if a_i is a nonpump letter, then φ_i := (x_{i−1} + α_i a_i = x_i), else • if a_i is a pump letter (A, n), then x_{i−1}, x_i are plugged into an instantiation of the formula obtained from Theorem 3.4 for A, along with the corresponding vectors x_{j−1}, x_j for the dual letter (Ā, n) = a_j, to give the formula φ_{A,n}. In this case, φ_i = φ_j = φ_{A,n}.
The formula φ_T existentially quantifies over the variables x_1, …, x_{m−1} as well as the firing fractions α_1, …, α_m, while x_0 and x_m are free variables corresponding to u and v respectively. The final formula we want is ⋁_T φ_T.

Capturing pump reachability relations
It remains to prove Theorem 3.4. The key observation is that a characterization of Blondin and Haase [2017] of the existence of "cyclic runs" (i.e., ones that start and end in the same control state) in Q+-VASS actually also applies to Q+-VASS with infinitely many control states. Thus, the first step is to translate the setting of pumps into that of cyclic Q+-VASS with infinitely many control states.
It is more convenient for us to use algebraic terminology, so we will phrase this as a translation to the case of semigroups. We say a language L ⊆ Σ* is a semigroup if it is closed under concatenation, i.e., for any u, v ∈ L, we have uv ∈ L. We will show that for our particular cyclic Q+-VASS, the characterization of Blondin and Haase allows us to build an exponential-sized ELRA formula.
Reduction to semigroups. Let us first show that the pump reachability relations R_A can be captured using context-free semigroups. In this section, we often write letters a in normal font, even though they are vectors in Z^d. Vectors in Q^d_+ are represented in bold font, e.g., u. The following lemma uses the idea of simulating grammar derivations by applying "the left part forward" and "the right part backward", which is a recurring theme in the literature on context-free grammars (see, e.g., [Baumann et al. 2023; Berstel 1979; Lohrey et al. 2022; Reps et al. 2016; Rosenberg 1967]) and has been applied to PVASS by Leroux et al. [2015, Section 5].

Lemma 3.5. Given a grammar G over Z^d and a non-terminal A in G, one can compute, in polynomial time, a context-free language L ⊆ (Z^{2d})* such that (i) L is a semigroup and (ii) for any u, u′, v′, v ∈ Q^d_+, we have (u, u′, v′, v) ∈ R_A if and only if (u, v) →_L (u′, v′).

Suppose (N, S, Σ, P) is a context-free grammar in Chomsky normal form, with Σ ⊆ Z^d and A ∈ N. The idea is to take a derivation tree for A ⇒* w A w′ and consider the path from the root to the A in the derived word; see Fig. 3 on the left. We transform the tree as follows. Each subtree on the left of this path (ℓ_1 and ℓ_2 in the figure) is left unchanged, except that each produced vector a ∈ Z^d is padded so as to obtain (a, 0, …, 0) ∈ Z^{2d}. Each subtree on the right (r_1 and r_2 in the figure), however, is moved to the left side of the path and it is reversed, meaning in particular that the word produced by it is reversed. Moreover, each vector a ∈ Z^d occurring at a leaf of such a subtree is turned into (0, …, 0, −a) ∈ Z^{2d}.

Then, every word generated by the new grammar is an order-preserving interleaving of −→w and ←−w′ for some derivation A ⇒* w A w′ of the original grammar. Here, for a word w ∈ (Z^d)*, −→w is obtained from w by replacing each vector a ∈ Z^d in w by (a, 0, …, 0) ∈ Z^{2d}, and ←−w is obtained from w by reversing the word and replacing each a ∈ Z^d by (0, …, 0, −a) ∈ Z^{2d}. Conversely, for every derivation A ⇒* w A w′, such an interleaving of −→w and ←−w′ is generated. Formally, in the new grammar for L, we have three copies −→X, ←−X, X̂ of each non-terminal X ∈ N. The productions in the new grammar are as follows: For every production X → YZ in P, we include the productions −→X → −→Y −→Z, ←−X → ←−Z ←−Y, X̂ → −→Y Ẑ, and X̂ → ←−Z Ŷ. Moreover, for every X → a with a ∈ Z^d, we include the productions −→X → (a, 0, …, 0) and ←−X → (0, …, 0, −a). Finally, we add Â → ε and set the start symbol to Â. Observe that the generated language is closed under concatenation. This is because at any point in any derivation, there is exactly one hat nonterminal symbol, which always occurs as the last symbol, and only Â can be replaced by ε.

Reduction to letter-uniform semigroups. As a second step, we will further reduce the problem to the case where, in all runs, the letters (i.e., added vectors) appear uniformly (in a precise sense). The support sequence of a word w ∈ Σ* is the tuple (Γ, <) where Γ ⊆ Σ is the subset of letters occurring in w and < is a total order on Γ which corresponds to the order of first occurrence of the letters in w. For example, the support sequence of the word abcb consists of Γ = {a, b, c} and the linear ordering a < b < c.
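Computing the support sequence of a word is straightforward; the following is our sketch, where letters are arbitrary hashable values (e.g., tuples standing for vectors in Z^d):

```python
def support_sequence(word):
    """Return (Gamma, <) of a word, given as a list of letters: the set
    of letters occurring in it and their order of first occurrence."""
    order = []
    for a in word:
        if a not in order:
            order.append(a)
    return set(order), order

# The example from the text: abcb has Gamma = {a, b, c} with a < b < c.
assert support_sequence(list("abcb")) == ({"a", "b", "c"}, ["a", "b", "c"])
```

The same function works verbatim when letters are integer vectors represented as tuples.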

A context-free language L ⊆ (Z^d)* is letter-uniform if any two words in L have the same support sequence. Let Σ ⊆ Z^d be the set of letters occurring in L. For every subset Γ ⊆ Σ and total order < on Γ, given by a_1 < a_2 < ⋯ < a_k, let L_(Γ,<) = {w ∈ L : w = a_1 w_1 a_2 w_2 ⋯ a_k w_k where w_i ∈ {a_1, …, a_i}*} denote the set of all words in L with support sequence (Γ, <).
Then we can observe that each L_(Γ,<) is letter-uniform and also a semigroup: for any two words x, y ∈ L_(Γ,<), the letters occurring in x and y are exactly Γ and, furthermore, the order of first occurrence of the letters from Γ in the two words corresponds to the total order <. Furthermore, xy, yx ∈ L since L is a semigroup. Hence both of these words also belong to L_(Γ,<). Moreover, we have u →_L v if and only if there exists some Γ ⊆ Σ and total order < on Γ with u →_{L_(Γ,<)} v. We shall prove the following:

Proposition 3.6. Given a context-free letter-uniform semigroup L ⊆ (Z^d)*, we can in NEXPTIME construct an ELRA formula for the relation R_L.
Let us see how Theorem 3.4 follows from Proposition 3.6. Given some nonterminal A of a CFG, we want a formula for R_A. We first use Lemma 3.5 to compute a context-free language L such that R_L captures R_A (up to permuting some counters). Suppose L ⊆ Σ* for some Σ ⊆ Z^d. For each subset Γ ⊆ Σ and total order < on Γ, consider the set L_(Γ,<) as defined earlier. As we already observed, (i) each L_(Γ,<) is a semigroup, (ii) each L_(Γ,<) is letter-uniform, and (iii) L is the union of all L_(Γ,<). Therefore, our construction proceeds as follows. We guess (Γ, <) and then apply Proposition 3.6 to compute in NEXPTIME an ELRA formula for R_{L_(Γ,<)}. Then, the disjunction of all resulting formulae clearly defines R_A. We note that given some (Γ, <), a grammar for L_(Γ,<) can be constructed from the grammar for L in polynomial time. This is because we need to construct a grammar for the intersection of L with the language of the regular expression given by (Γ, <), which only incurs a polynomial blowup. In fact, with only the subset Γ and without the linear order, the same construction would lead to an exponential blowup, since we would then have to remember all possible subsets of Γ while reading a word.

Characterizing reachability by three runs. It remains to show Proposition 3.6. The advantage of reducing our problem to the letter-uniform case is that we can employ a characterization of Blondin and Haase [2017] about the existence of runs. In the rest of this section, we will assume that the language L comes with a corresponding support sequence (Γ, <).
The following lemma tells us that reachability along a letter-uniform semigroup can be characterized by the existence of three runs: one run that witnesses reachability under Q-semantics, and two runs that witness "admissibility" in both directions. Here, the "only if" direction is trivial, because the run from u to v along L is a run of all three types. For the converse, we use the fact that L is a semigroup and letter-uniform to compose the three runs into a run under Q+-semantics.

Lemma 3.7. Let L ⊆ (Z^d)* be a letter-uniform semigroup. Then we have u →_L v if and only if (i) u →_{L,Q} v, (ii) u →_L v′ for some v′ ∈ Q^d_+, and (iii) u′ →_L v for some u′ ∈ Q^d_+.

Lemma 3.7 is an extension of [Blondin and Haase 2017, Proposition 4.5]. The only difference is that in [Blondin and Haase 2017], L is given by a non-deterministic finite automaton where one state is both the initial and the final state. The "only if" direction is trivial. For the "if" direction, the proof in [Blondin and Haase 2017] takes the three runs and shows that a suitable concatenation of these runs, together with an appropriate choice of multiplicities, yields the desired run from u to v along L. Since L is a semigroup, the same argument yields Lemma 3.7. See Subsection A.1 of the appendix for details.
Lemma 3.7 allows us to express the reachability relation along L: it tells us that we merely have to express the existence of the three simpler types of runs. The first of the three runs is reachability under Q-semantics, while the second and third are examples of admissible runs under Q+-semantics. Thus we need to characterize these two types of runs. We will do this in the following two subsections.

Characterizing reachability under Q-semantics. We first show how to construct an ELRA formula for the Q-reachability relation along a letter-uniform context-free language L.
Lemma 3.8. Given a letter-uniform context-free language L ⊆ (Z^d)*, we can construct in exponential time an ELRA formula for the relation {(u, v) : u →_{L,Q} v}.

Our proof relies on the following, which was shown in [Blondin and Haase 2017, Proposition B.4]:

Lemma 3.9 (Blondin and Haase [2017]). Given an NFA A over some alphabet Σ ⊆ Z^d, one can construct in polynomial time an ELRA formula for the Q-reachability relation along L(A).

Proof of Lemma 3.8. The key observation is that in the case of Q-semantics, reachability along a word w ∈ (Z^d)* does not depend on the exact order of the letters in w. Let Ψ(w) ∈ N^{|Σ|} be the Parikh image of w, i.e., Ψ(w)(a) for a ∈ Σ denotes the number of times a occurs in w. Formally, if Ψ(w) = Ψ(w′), then u →_{w,Q} v if and only if u →_{w′,Q} v. We use this to reduce the case of context-free L to the case of regular languages.
It is well known that given a context-free grammar, one can construct an NFA of exponential size such that the NFA accepts a language with the same Parikh image as the grammar; a simple construction with a close-to-tight size bound can be found in [Esparza et al. 2011]. Therefore, given L, we can construct an exponential-sized NFA A such that Ψ(L(A)) = Ψ(L).

Observe that Ψ(L(A)) = Ψ(L) implies that the Q-reachability relations along L(A) and along L coincide. Therefore, we apply Lemma 3.9 to compute a formula φ from A. Since A is exponential in size, this computation takes exponential time and results in an exponential-size formula φ. Then φ defines the relation {(u, v) : u →_{L,Q} v}, which proves Lemma 3.8. We note that it is also possible to use a construction of [Verma et al. 2005] to construct a formula for Q-reachability along L in polynomial time; the reason we chose the current presentation is that applying the result from [Verma et al. 2005] as a black box would result in a formula over mixed integer-rational arithmetic (of polynomial size): one would use integer variables to implement the construction from [Verma et al. 2005] and then rational variables to account for the continuous semantics. This would yield the same complexity bound in the end (existential mixed linear arithmetic is still in NP), but we preferred not to introduce another logic.

Characterizing admissibility under Q+-semantics. Finally, we construct an ELRA formula for the set of vectors u such that there exists a v with u →_L v. We call such vectors L-admissible and denote the set of L-admissible vectors by Adm_L. The key observation is that u is L-admissible if and only if u and the total order < satisfy some simple properties. Intuitively, u is <-admissible if for each letter that decrements a counter, either (i) that counter is positive in u or (ii) there is an earlier letter that increments this counter. For a ∈ Z^d, we denote by supp(a) (resp. supp+(a), supp−(a)) the subset of indices i where a(i) ≠ 0 (resp. a(i) > 0, a(i) < 0). Formally, u is <-admissible if for each a ∈ Γ and each i ∈ supp−(a), we either have (i) u(i) > 0 or (ii) there is a b ∈ Γ with b < a and i ∈ supp+(b). We show the following:

Lemma 3.10. Let L be letter-uniform and L ≠ ∅. Then u ∈ Adm_L if and only if u is <-admissible.
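The <-admissibility condition is a simple syntactic check; the following is our sketch, with letters given as tuples listed in their <-order:

```python
def is_order_admissible(u, letters):
    """letters: the support sequence Gamma, listed in its order <.
    u is <-admissible if every coordinate i decremented by some letter a
    is either positive in u or incremented by an earlier letter b < a."""
    for k, a in enumerate(letters):
        for i, ai in enumerate(a):
            if ai < 0:
                if u[i] > 0:
                    continue  # condition (i): positive in u
                if any(b[i] > 0 for b in letters[:k]):
                    continue  # condition (ii): an earlier letter increments i
                return False
    return True

# (0, 1) works if the incrementing letter (1, -1) comes first:
assert is_order_admissible((0, 1), [(1, -1), (-1, 1)])
assert not is_order_admissible((0, 1), [(-1, 1), (1, -1)])
```

Since the check only inspects u coordinate-wise against fixed sign patterns, it translates directly into the ELRA formula discussed in the text.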
The "only if" direction is easy. If u is not <-admissible, this means that there is some index i and some letter a ∈ Γ such that a decrements i, u(i) = 0 and b(i) ≤ 0 for all b < a. This means that starting from u, on any word w ∈ L, we would go below 0 in index i when a first occurs in w. Hence u ∉ Adm_L.
For the "if" direction, let a_1 < a_2 < ⋯ < a_k be the letters of Γ. We will show the following for every j ≤ k: there exists a v_j such that u can reach v_j by firing letters among a_1, …, a_j, and v_j is positive on every coordinate in S_j := supp+(u) ∪ supp+(a_1) ∪ ⋯ ∪ supp+(a_j). We proceed by induction on j. For j = 0, the statement clearly holds, because u is positive on all coordinates in S_0 = supp+(u) and u ∈ Q^d_+. Now suppose there is a run from u to v_{j−1} as above. Since u is <-admissible, every coordinate decremented by a_j is either positive in u or incremented by an earlier letter, and hence positive in v_{j−1}; thus a sufficiently small fraction of a_j can be fired from v_{j−1}, and the resulting vector v_j is positive on S_j. The formula φ_<, where φ_<(u) for a vector u ∈ Q^d_+ is true iff u is <-admissible, can be written as the conjunction of the constraints u(i) > 0 over all pairs (a, i) with a ∈ Γ and i ∈ supp−(a) for which no b < a has i ∈ supp+(b). Together with Lemma 3.10, this yields an ELRA formula for Adm_L; we record this as Lemma 3.11.

Proof of Proposition 3.6. We are now ready to prove Proposition 3.6. By Lemma 3.7, it suffices to show that there are formulae for the Q-reachability relation along L and for Adm_L. These formulae have been obtained in Lemma 3.8 and Lemma 3.11 respectively. □

This concludes the proof that reachability in Q+-PVASS is in NEXPTIME.

State reachability
The material in this section also allows us to derive Theorem 1.3. Since state reachability is NP-hard already for Q + -VASS [Blondin and Haase 2017], we only have to show membership in NP. Using the language-theoretic translation from the beginning of this section, state reachability can be phrased as: Given a set of vectors Σ ⊆ Z^d, a letter-uniform context-free language L ⊆ Σ* (which comes with an associated (Γ, <)) and u ∈ Q_+^d, decide if u is admissible. This is because, given an arbitrary context-free language L, we can see it as the disjoint union of (exponentially) many L_(Γ,<), each of which is letter-uniform. Furthermore, we can construct each L_(Γ,<) in polynomial time and check whether it is non-empty. The NP upper bound then follows from Lemma 3.11: We can guess (Γ, <), construct L_(Γ,<), and Lemma 3.11 lets us build an ELRA formula for the admissible vectors of L_(Γ,<). Since the truth problem for ELRA is in NP [Sontag 1985], we can then check whether u satisfies the constructed formula.

FROM BOUNDED-PCP TO 2CM •2,+1 RL
We now move on to proving the NEXPTIME-hardness of reachability in Q + -PVASS. As outlined in the introduction, we do this by a chain of reductions. Our reduction chain starts with the machine model 2CM •2,+1 RL. Informally, a 2CM •2,+1 RL has a finite-state control along with two counters, each of which can hold a non-negative integer. A rule of the machine allows us to move from one state to another whilst either incrementing the value of a counter by 1 or doubling the value of a counter. The set of final configurations of such a machine is given by a final state and an equality condition on the two counters.
Formally, a 2CM •2,+1 RL is a tuple M = (Q, q_init, q_fin, Δ) where Q is a finite set of states, q_init, q_fin ∈ Q are the initial and the final states respectively, and Δ ⊆ Q × {inc 0, inc 1, double 0, double 1, nop} × Q is a finite set of rules. A configuration of M is a triple (q, m0, m1) where q ∈ Q is the current state of M and m0, m1 ∈ N are the current values of the two counters. Let r = (q, op, q′) ∈ Δ be a rule. A step from a configuration c = (q, m0, m1) via r moves to the state q′ and applies the operation op to the counters. We then say that a configuration c can reach another configuration c′ if c′ can be reached from c by a sequence of steps. The initial configuration of M is c_init := (q_init, 0, 0). The set of final configurations of M is taken to be {(q_fin, m, m) : m ∈ N}.
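The step semantics can be sketched as a small simulator (illustrative state and operation names; the machine from Figure 4 is not reproduced here). It also illustrates why a run of length k can build counter values up to 2^(k-1):

```python
def step(config, rule):
    """One step of a 2CM with increment, doubling and nop rules (a sketch)."""
    state, c0, c1 = config
    src, op, dst = rule
    if state != src:
        raise ValueError("rule not enabled in this state")
    if op == "inc0":
        c0 += 1
    elif op == "inc1":
        c1 += 1
    elif op == "double0":
        c0 *= 2
    elif op == "double1":
        c1 *= 2
    elif op != "nop":
        raise ValueError("unknown operation")
    return (dst, c0, c1)

# One increment followed by 10 doublings: a run of length 11 reaches 2^10.
cfg = ("q", 0, 0)
cfg = step(cfg, ("q", "inc0", "q"))
for _ in range(10):
    cfg = step(cfg, ("q", "double0", "q"))
print(cfg)  # ('q', 1024, 0)
```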
The reachability problem for 2CM •2,+1 RL asks, given a 2CM •2,+1 RL M and a number n in binary, whether the initial configuration can reach some final configuration in exactly n steps.
Example 4.1. Let us consider the 2CM •2,+1 RL given in Figure 4, which we shall denote by M. The initial state is q0 and the final state is q2. Note that the initial configuration (q0, 0, 0) can reach (q2, 1, 1) in exactly 2 steps. Hence, if we set n = 2, then the instance ⟨M, n⟩ is a positive instance of the reachability problem for 2CM •2,+1 RL. For any other value of n, the instance ⟨M, n⟩ is a negative instance of the reachability problem for 2CM •2,+1 RL. Indeed, first note that in this 2CM •2,+1 RL, each state has exactly one outgoing transition. Hence, there is exactly one run starting from (q0, 0, 0), and that run is as follows: First it reaches the configuration (q2, 1, 1) in exactly 2 steps. Then from there, it follows a cyclical pattern.
This pattern indicates that after the configuration (q2, 1, 1), whenever the run reaches the state q2, the first counter has an odd value, whereas the second counter has an even value. Hence, the run will never reach a final configuration, and so ⟨M, n⟩ is a negative instance of the reachability problem whenever n ≠ 2.
Theorem 4.2 is shown using a bounded version of the classical Post Correspondence Problem (PCP). Recall that in the PCP problem, we are given a set of pairs of words (u1, v1), (u2, v2), . . ., (um, vm) over a common alphabet Σ and we are asked to decide if there is a sequence of indices i1, i2, . . ., ik for some k such that u_{i1} u_{i2} · · · u_{ik} = v_{i1} v_{i2} · · · v_{ik}. It is well-known that this problem is undecidable [Sipser 2012]. For our purposes, we shall use a bounded version of PCP, called Bounded-PCP, defined as follows.
Input: A set of pairs of words (u1, v1), (u2, v2), . . ., (um, vm) over an alphabet Σ such that none of the given words is the empty string, and a number ℓ encoded in binary. Question: Is there a sequence of indices i1, i2, . . ., ik such that u_{i1} · · · u_{ik} = v_{i1} · · · v_{ik} and the length of this word is at most ℓ? Note that this problem is decidable: we simply have to guess a sequence of indices producing a word of length at most ℓ and check that the resulting words from these indices satisfy the given property. In [Aiswarya et al. 2022, Section 6.1], Bounded-PCP was shown to be NEXPTIME-hard. We now prove Theorem 4.2 by giving a reduction from Bounded-PCP to the reachability problem for 2CM •2,+1 RL. Let (u1, v1), . . ., (um, vm) be a set of pairs of words over a common alphabet Σ and let ℓ ∈ N. Without loss of generality, we assume that |Σ| = 2^s for some s ≥ 1, which can be ensured, for instance, by adding at most twice as many dummy letters as the size of the alphabet. With this assumption, there are two essential ideas behind this reduction, which we now briefly outline.
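Verifying a candidate solution is straightforward, which is what makes the guess-and-check decision procedure work (a sketch; the instance below is a classic textbook PCP instance, not one from the paper):

```python
def check_bpcp(pairs, indices, ell):
    """Verify a candidate Bounded-PCP solution: the two concatenations must
    agree and the resulting word must have length at most ell."""
    u = "".join(pairs[i][0] for i in indices)
    v = "".join(pairs[i][1] for i in indices)
    return u == v and len(u) <= ell

# A classic PCP instance with solution 3,2,3,1 (0-based: 2,1,2,0):
pairs = [("a", "baa"), ("ab", "aa"), ("bba", "bb")]
print(check_bpcp(pairs, [2, 1, 2, 0], ell=9))   # True:  both sides spell "bbaabbbaa"
print(check_bpcp(pairs, [0, 1], ell=9))         # False: "aab" != "baaaa"
```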
The first idea is as follows: Since the size of Σ is 2^s, we can identify Σ with the set {0, 1, . . ., 2^s − 1} by mapping each letter in Σ to some unique number in {0, 1, . . ., 2^s − 1}. This identification means that any non-empty word w represents a number n in base |Σ|, written with the most significant digit first. In this way, to any number n we can bijectively map a non-empty word w.
The second idea is as follows: Assume that we have a word w and its corresponding number n. Suppose we are given another word w′ and we are asked to compute the number corresponding to the concatenated word w · w′. We can do that as follows: Let w′ = w′_1 · · · w′_j and let n′ be the number corresponding to w′. Then n · |Σ|^j + n′ is the representation of the word w · w′.
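In code, the second idea is just a shift-and-add in base |Σ| (a sketch; words are written as lists of digits in {0, ..., B−1}):

```python
def value(word, B):
    """Number represented by a word over {0, ..., B-1}, most significant digit first."""
    n = 0
    for d in word:
        n = n * B + d
    return n

def concat_value(n, w2, B):
    """Value of w1 . w2, given n = value(w1): shift n by B^len(w2), then add value(w2)."""
    return n * B ** len(w2) + value(w2, B)

B = 4
w1, w2 = [1, 2], [3, 0, 1]
print(concat_value(value(w1, B), w2, B) == value(w1 + w2, B))  # True
```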
These two ideas essentially illustrate the reduction from the Bounded-PCP problem to the reachability problem for 2CM •2,+1 RL. Given a Bounded-PCP instance ⟨(u1, v1), . . ., (um, vm), ℓ⟩, we construct a 2CM •2,+1 RL as follows: Initially, it starts at an initial state q_init with both of its counters set to 0. From here, it executes a loop in the following manner: Suppose at some point the machine is at state q_init with counter values n1 and n2 corresponding to some strings w1 and w2 respectively. Then the machine picks some index i between 1 and m and, by the idea given in the previous paragraph, it updates the values of its counters to n′1 and n′2 corresponding to the strings w1 · u_i and w2 · v_i, respectively, and then comes back to the state q_init.
We can hard-code the rules of this machine so that whenever it has the representations of two strings w, w′ in its counters and it wants to compute the representations of w · u_i and w′ · v_i for some 1 ≤ i ≤ m, it takes exactly K steps, for some K which is polynomial in the size of the given Bounded-PCP instance. Then clearly, reaching a configuration (q_fin, n, n) for some number n in the machine in exactly the given number of steps is equivalent to finding a sequence of indices i1, . . ., ik whose corresponding words are equal and have length at most ℓ.

FROM 2CM •2,+1 RL TO [0, 1]-VASS 0? RL
The next step in our reduction chain moves from 2CM •2,+1 RL to [0, 1]-VASS 0? RL. Informally, a [0, 1]-VASS 0? RL has a finite-state control along with some number of continuous counters, each of which can only hold a fractional number belonging to the interval [0, 1]. A rule of such a machine allows us to move from one state to another whilst incrementing or decrementing some counters by some fractional number. Further, a rule can also specify that the effect of firing that rule makes some counters 0, thereby allowing us to perform zero-tests. Note that a [0, 1]-VASS 0? RL is different from a Q + -VASS in two aspects: First, the counters of a [0, 1]-VASS 0? RL can only hold numbers in [0, 1], whereas the counters of a Q + -VASS can hold any non-negative rational number. Second, the counters of a [0, 1]-VASS 0? RL can be tested for zero, which is not possible in a Q + -VASS. We now proceed to formally define the model of a [0, 1]-VASS 0? RL.
RL is NEXPTIME-hard. We prove this theorem by exhibiting a reduction from the reachability problem for 2CM •2,+1 RL. Given an instance ⟨M, n⟩, the largest value we can attain in any counter during a run of length n is at most 2^n (in fact, the bound is 2^{n−1}). Hence, we shall implicitly assume that the set of configurations of M under consideration are those where the counter values are bounded by 2^n.

Overview of the reduction. We want to construct a [0, 1]-VASS 0? RL C that simulates M. As already mentioned in the introduction, we use exponential precision and represent a discrete counter value m in a configuration of M as the value m/2^n in a continuous counter of C. Furthermore, we want to correctly simulate increment and doubling operations of M, which correspond to addition of 1/2^n and doubling in C respectively. Since we do not control the fraction with which a rule is fired, we have to overcome the following challenge: (C1) How can we create gadgets which simulate addition of 1/2^n and doubling? Towards solving this challenge, we use the following idea: Suppose we are in some configuration and we want to make a step that adds 1/2^n to a counter c. Assume that there are two other counters z and t whose current values are 1/2^n and 0, respectively. Suppose r is a rule which decrements z by 1, increments c and t both by 1, and then checks that the value of z (after firing r) is 0. Then a successful firing of r with some fraction α means that α = 1/2^n. This is because, by assumption, before firing this rule the value of z was 1/2^n, and after firing this rule, the zero-test ensures that the value of z is 0.
Hence, the only possible value that α can take is 1/2^n. Therefore, this rule allows us to add 1/2^n to the counter c. However, note that after firing r, the values of z and t are swapped, i.e., the values of z and t are now 0 and 1/2^n, respectively. This is undesirable, as we might once again want to use z to simulate addition by 1/2^n. Therefore, we add another rule r′, which decrements t by 1, increments z by 1 and then checks that the value of t (after firing r′) is 0. A successful firing of the rule r′ with some fraction β means that β = 1/2^n (for the same reasons as above), and so the values of z and t after firing r′ again become 1/2^n and 0, respectively. Hence, the counter t essentially acts as a temporary holder of the value of z and allows us to "refill" the value of z.
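The interplay of the two rules can be simulated with exact rational arithmetic (a sketch: counter names c, z, t are illustrative, and the zero-tests are modelled by assertions that pin the firing fractions):

```python
from fractions import Fraction

def add_half_pow(c, z, t):
    """Add the value stored in z (intended: 1/2^n) to c, using t as temporary
    storage, then refill z.  Each rule's zero-test leaves only one possible
    firing fraction (a sketch of the construction, not its literal rules)."""
    # first rule: scaled by alpha, then zero-test on z  =>  alpha must equal z
    alpha = z
    z, c, t = z - alpha, c + alpha, t + alpha
    assert z == 0                     # the zero-test forcing alpha = old z
    # second rule: scaled by beta, then zero-test on t  =>  beta must equal t
    beta = t
    t, z = t - beta, z + beta
    assert t == 0                     # the zero-test forcing beta = old t
    return c, z, t

n = 5
c, z, t = Fraction(0), Fraction(1, 2**n), Fraction(0)
for _ in range(3):                    # three additions of 1/2^n
    c, z, t = add_half_pow(c, z, t)
print(c, z, t)  # 3/32 1/32 0
```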
Generalizing this technique allows us to control the firing fraction to perform doubling as well. However, this technique has a single obstacle, which we now address.
For this technique to work, we initially need a counter z which stores the value 1/2^n. It might be tempting to simply declare that the value of z in the initial configuration is 1/2^n. However, this cannot be done, because the number n is given to us in binary, and so the number of bits needed to write down the number 1/2^n is exponential in the size of the given input ⟨M, n⟩, which would not give us a polynomial-time reduction. This raises the following challenge as well: (C2) How can we create the value 1/2^n in a continuous counter? We show that Challenge (C2) can also be solved by our idea of controlling the firing fraction.

Solving Challenge (C1). From the 2CM •2,+1 RL M = (Q, q_init, q_fin, Δ), we construct a [0, 1]-VASS 0? RL C0 as follows. C0 will have 4 counters x0, x1, z, t, i.e., it will be 4-dimensional. Intuitively, each x_i will store the value of one of the counters of M, z will store the value 1/2^n that is needed for simulating the addition operation, and t will be used to temporarily store the values of x0, x1 and z at some points along a run. A rule in C0 consists of a vector u ∈ Z^4 and a subset S ⊆ {1, 2, 3, 4}. For ease of reading, we write the vector u as a sequence of increment or decrement operations (+= or -=), whose intended meaning is that the named counter is incremented (or decremented) by the given amount, followed by a sequence of zero-tests. For example, the rule (u, S) where u = (1, 0, 0, −2) and S = {1, 3} is represented by x0 += 1, t -= 2; x0 = 0?, z = 0?. C0 will have all the states of M and, in addition, for every rule r of M, it will have a state q_r. The set of rules of C0 is given in Figure 5.
Note that for every rule r of M, the corresponding gadget in C0 has exactly two rules, where the first rule (from q to q_r) will be denoted by r1 and the second rule (from q_r to q′) by r2. We would now like to show that the rules of M are simulated by their corresponding gadgets. To this end, we first define a mapping f from configurations of M to configurations of C0 as follows: If c = (q, m0, m1), then f(c) is the configuration of C0 such that state(f(c)) = q, f(c)(x_i) = m_i/2^n for i ∈ {0, 1}, f(c)(z) = 1/2^n and f(c)(t) = 0. We now have the following "gadget simulation" lemma, which solves Challenge (C1).
Lemma 5.2 (Gadget Simulation). Suppose c is a configuration and r is a rule of M.

Proof sketch. We have already discussed the case of increments in some detail before, and so we will concentrate on the case when r is a doubling rule of the form (q, double_i, q′). The soundness part can be easily obtained by setting α = f(c)(x_i) and β = 2 · f(c)(x_i). For completeness, note that since r1 has a zero-test on x_i, it must be that α = f(c)(x_i). Hence, after firing r1, the value of t must be 2 · f(c)(x_i). Now since r2 has a zero-test on t, it must be that β = 2 · f(c)(x_i). So the net effect of firing r1, r2 is to make the value of x_i equal to 2 · f(c)(x_i). Hence, if we let c′ be such that c reaches c′ via r in M, it can be verified that the configuration reached is f(c′). For more details, see Subsection B.1 of the appendix. □

The "finish" gadget. Before we solve Challenge (C2), we make a small modification to C0. Recall that in M, we have a set of final configurations F := {(q_fin, m, m) : m ≤ 2^n}, whereas in a [0, 1]-VASS 0? RL, we are allowed to specify only one final configuration. However, the [0, 1]-VASS 0? RL C0 only promises us that the initial configuration c_init of M can reach some configuration in F in n steps iff f(c_init) can reach some configuration in the set {f(c) : c ∈ F} in 2n steps. Hence, we need to make a modification to C0 which allows us to replace this set of configurations with a single final configuration. To this end, we modify C0 by adding the "finish gadget" from Figure 5d, where q′_F and q_F are two fresh states and the first and the second rule are respectively denoted by f1 and f2. Let us call the resulting [0, 1]-VASS 0?
RL C1. Note that the effect of firing f1 is to set the values of z and t to 0. Further, if f2 is fired, then x0 and x1 are decremented by the same amount and both of them are tested for zero. This means that f2 can be fired successfully only if the counter values of x0 and x1 at state q′_F are the same, and the effect of firing f2 is to set the values of x0 and x1 to 0. This observation, along with repeated applications of the Gadget Simulation lemma, gives us the following Simulation theorem.
Theorem 5.3 (Simulation Theorem). The initial configuration c_init of M can reach a final configuration in n steps iff f(c_init) can reach the configuration (q_F, 0) in 2n + 2 steps in C1.
The full proof of this theorem can be found in Subsection B.2 of the appendix. We now move on to solving Challenge (C2).

Solving Challenge (C2). Thanks to the Simulation theorem, the required reduction is almost over. As we discussed before, the only remaining issue is that since the z-value of f(c_init) is 1/2^n and n is given in binary, we cannot write down f(c_init) in polynomial time. To handle this challenge (Challenge (C2)), we construct an "initialization" gadget which starts from a "small" initial configuration and then "sets up" the configuration f(c_init).
The initialization gadget is shown in Figure 6. The gadget shares the counters z and t with C1 and has two new counters x and y. Initially, the gadget starts in p0 with the values 1, 0, 1/n and 0 in z, t, x and y, respectively. In each iteration of the gadget, the value of z is halved. The function of x is to store the value 1/n and the function of y is to count the number of executions of this gadget. Initially the value of y is 0, and in every iteration its value increases by 1/n. Hence, if we finally require the value of y to be 1, then we must have executed this gadget precisely n times, thereby setting the value of z to 1/2^n.
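Abstractly, the gadget's net effect per iteration is "halve z, add 1/n to y", and requiring y = 1 at the exit forces exactly n iterations. This arithmetic can be checked directly (a sketch of the net effect only; the zero-test mechanics enforcing it live inside the gadget):

```python
from fractions import Fraction

def initialize(n):
    """Net effect of the initialization gadget: each iteration halves z and
    adds 1/n to y; the exit condition y = 1 forces exactly n iterations."""
    z, y = Fraction(1), Fraction(0)
    while y != 1:
        z /= 2
        y += Fraction(1, n)
    return z

print(initialize(7) == Fraction(1, 2**7))  # True
```

Note that exact rationals matter here: with binary floating point, 1/n is not representable for most n, and the loop's exit condition would misbehave.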
The following lemma, whose full proof can be found in Subsection B.3 of the appendix, follows from an analysis of the initialization gadget similar to the one for the Gadget Simulation lemma.
RL C as follows: We take the initialization gadget and the [0, 1]-VASS 0? RL C1 and add a rule from p0 to q_init which does nothing to the counters. Intuitively, we first execute the initialization gadget for some number of steps and then pass the control flow to C1. We let d_init be the configuration of C whose state is p0 and whose counter values are all 0, except for d_init(x) = 1/n and d_init(z) = 1. Then, we let d_fin be the configuration of C whose state is q_F and whose counter values are all 0, except for d_fin(x) = 1/n and d_fin(y) = 1. If we encode d_init and d_fin in binary, then they can be written down in polynomial time. Since d_fin(y) = 1, when the control flow passes from the initialization gadget to C1, the value of z must be 1/2^n, which is exactly what we want. Theorem 5.5. f(c_init) can reach the configuration (q_F, 0) in the [0, 1]-VASS 0?
The full details behind this theorem can be found in Subsection B.4 of the appendix. Combining this theorem with Theorem 5.3 proves the correctness of our reduction.
FROM [0, 1]-VASS 0? RL TO Q + -PVASS
We now move on to the next step in our reduction chain with the following problem, called the reachability problem for Q + -VASS RL: Given a Q + -VASS M, two configurations c_init, c_fin, and a number n in binary, decide whether one can reach c_fin from c_init in exactly n steps. Theorem 6.1. The reachability problem for Q + -VASS RL is NEXPTIME-hard.
We prove this theorem by giving a reduction from the reachability problem for [0, 1]-VASS 0? RL. Suppose we are given a [0, 1]-VASS 0? RL C, two of its configurations c_init, c_fin, and a number n. Without loss of generality, we assume that every rule in C performs at least one zero-test.

Overview of the reduction. We want to construct a Q + -VASS M that simulates C for n steps. The primary challenge that prevents us from doing this is the following: (D1) How can we create gadgets that simulate the zero-tests of C in M?
We circumvent this challenge as follows: We know that in a [0, 1]-VASS 0? RL, the value of every counter will always be in the range [0, 1]. Hence, for every counter x, we introduce another counter x̄, called the complementary counter of x, and maintain the invariant x + x̄ = 1 throughout a run. Then testing whether the value of x is 0 amounts to testing whether the value of x̄ is at least 1. This allows us to replace a zero-test with a greater-than-or-equal-to-1 (geq1) test.
The latter can be implemented as follows: If r and r′ are rules which decrement and increment x̄ by 1 respectively, and there is a run which fires r and then r′, each with fraction 1, then we know that the value of x̄ before this run is at least 1, which lets us implement a geq1 test. Note that for this to succeed, we require that both r and r′ are fired completely, i.e., with fraction 1.
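The geq1 test can be pictured as follows (a sketch; both rules are fired with fraction 1, and a successful test leaves x̄ unchanged):

```python
from fractions import Fraction

def geq1_test(xbar):
    """Fire 'xbar -= 1' and then 'xbar += 1', both with fraction 1.  The pair
    of steps keeps all counters non-negative iff xbar >= 1, i.e. iff the
    original counter x = 1 - xbar is zero; afterwards xbar has its old value."""
    if xbar - 1 < 0:        # the decrement step would drive xbar below 0
        return False
    return True

print(geq1_test(Fraction(1)))      # True:  x = 0, the simulated zero-test passes
print(geq1_test(Fraction(2, 3)))   # False: x = 1/3 is nonzero
```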
To sum this up: if we were to simulate a rule r = (q, (u, S), q′) of the [0, 1]-VASS 0? RL C in our new machine with the complementary counters, we need one rule to take care of the updates corresponding to u and two rules to take care of the geq1 tests corresponding to the zero-tests in S, both of which must be fired completely. Hence, simulating n steps of C in our new machine requires 3n steps, of which exactly 2n steps must be fired completely. This leads us to: (D2) How can we force the rules corresponding to geq1 tests to be fired completely, exactly 2n times?
To solve this challenge, we introduce another counter d, called the controlling counter. We modify every rule corresponding to a geq1 test to also increment the value of the counter d by 1. This means that if ρ is a run of 3n steps such that the value of d after ρ is exactly 2n, then every rule corresponding to a geq1 test must have been fired completely along the run ρ.

Formal construction. Having given an informal overview of the reduction, we now proceed to the formal construction. We are given a [0, 1]-VASS 0? RL C and a number n in binary. From the [0, 1]-VASS 0? RL C, we construct a Q + -VASS M as follows. For every counter x of C, M will have two counters x and x̄. Every transition that increments x will decrement x̄ by the same amount and vice-versa, so that the sum of the values of x and x̄ is equal to 1 throughout. Further, M will have another counter d, called the controlling counter.
Suppose r := (q, ℓ, q′) is a rule of C such that ℓ = (u, S). Denote by w the vector such that w(d) = 0 and, for every counter x of C, w(x) = u(x) and w(x̄) = −u(x). Then, corresponding to the rule r of C, M will have the gadget in Figure 7, whose first, second and third rules will be denoted by r1, r2 and r3 respectively. For any configuration c of C, let g(c) denote the set of configurations D of M such that D ∈ g(c) iff state(D) = state(c), D(x) = c(x) and D(x̄) = 1 − c(x) for every counter x of C. Note that any two configurations in g(c) differ only in their value of the counter d. For any number e, let g(c)_e denote the unique configuration in g(c) whose d-value is e. The following lemma, whose full proof can be found in Subsection C.1 of the appendix, is a consequence of the discussion given in the overview. Lemma 6.2 (Control counter simulation). The full proof of the above theorem can be found in Subsection C.2 of the appendix. Example 6.4. Let us see a concrete application of this reduction on an example. To this end, consider the [0, 1]-VASS 0?
RL given in Figure 8. Note that this is essentially a renamed version of the "increment(i)" gadget described in Figure 5a. We consider this version here since it makes it easier to describe the effect of our reduction. The result of applying the reduction to this [0, 1]-VASS 0? RL is given in Figure 9. Suppose we start in q0 in the [0, 1]-VASS 0? RL given in Figure 8 with counter values a, b and 0 for the counters x, z and t, respectively. From the argument given in the previous section, we know that if we fire the [0, 1]-VASS 0? RL in Figure 8 once, then we will reach the state q2 with counter values a + b, b and 0 for the counters x, z and t, respectively. Now, suppose we start in q0 in the Q + -VASS RL given in Figure 9 with counter values a, 1 − a, b, 1 − b, 0, 1 and 0 in x, x̄, z, z̄, t, t̄ and d, respectively. From the reduction, we know that if we fire the gadget in Figure 9 once and reach the state q2 with counter value 4 for the controlling counter d, then the counter values for the counters x, x̄, z, z̄, t, t̄ are a + b, 1 − a − b, b, 1 − b, 0 and 1 respectively.

Wrapping up
We now provide the final steps to prove that reachability for Q + -PVASS is NEXPTIME-hard. To do this, we recall a well-known folklore fact about pushdown automata. It essentially states that we can implement a binary counter in a PDA.
Fig. 9. The result of applying the reduction to the [0, 1]-VASS 0? RL given in Figure 8.
Lemma 6.5. For any number n, in time polynomial in log(n), we can construct a PDA P_n of bounded stack-height and two configurations c and c′ such that there is exactly one run from c to c′. Moreover, this run is of length exactly n.
Proof. The essential idea is to use the stack to perform a depth-first search of a binary tree of size O(n). At any point, the PDA only stores O(log n) entries on its stack, because the depth of the tree is O(log n). We now give a more precise construction.
Note that when n = 1, P_1 can simply be taken to be a PDA with two states and a single transition between the first state and the second state which does nothing to the stack. Now, let us consider the case when n > 1 is a power of 2, i.e., n = 2^k for some k. Consider the following PDA P_n with k stack symbols s_1, s_2, . . ., s_k. P_n starts in the state p with the empty stack. It then moves to state q while pushing s_1 onto the stack. The state q has k self-loop transitions as follows: For each 1 ≤ i < k, the i-th self-loop pops s_i and pushes s_{i+1} twice. Further, the k-th self-loop simply pops s_k. It can be easily verified that starting from state p with the empty stack, there is exactly one path to the configuration whose state is q and whose stack is empty. Moreover, this path is of length exactly n. This is because the desired path is essentially the depth-first search traversal of a binary tree of size n − 1, where the root is labelled by s_1 and each node at height i is labelled by s_{i+1}. Due to the depth-first search traversal, the number of elements stored in the stack at any point during the run is O(k).
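The unique run of this PDA can be simulated directly to confirm the step count (a sketch; stack symbols s_1, ..., s_k are represented by their indices):

```python
def run_length(k):
    """Simulate the PDA counting to n = 2^k: one transition pushes s_1, and
    each self-loop pops s_i and either pushes two copies of s_{i+1} (if i < k)
    or pushes nothing (if i = k).  Returns the number of steps of the unique
    run from empty stack back to empty stack."""
    stack = [1]                          # the initial transition pushes s_1
    steps = 1
    while stack:
        i = stack.pop()                  # each self-loop pops one symbol
        steps += 1
        if i < k:
            stack.extend([i + 1, i + 1])  # ... and pushes s_{i+1} twice
    return steps

print([run_length(k) for k in range(1, 6)])  # [2, 4, 8, 16, 32]
```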
Now for the general case, suppose n = 2^{k_1} + · · · + 2^{k_j} for some k_1 > · · · > k_j ≥ 0. We chain together the PDAs P_{2^{k_1}}, . . ., P_{2^{k_j}} constructed above, so that starting from the first state with the empty stack, there is exactly one path to the configuration whose state is the last state and whose stack is empty, and this path is of length exactly n. □

We now give a reduction from reachability for Q + -VASS RL to reachability for Q + -PVASS. Let M be a Q + -VASS, let c_init and c_fin be two of its configurations, and let n be a number encoded in binary. Construct the triple (P_n, c, c′) as given by the folklore Lemma 6.5. We now take the usual cross product, i.e., the Cartesian product, between P_n and M to obtain a Q + -PVASS C. (This operation is very similar to taking the cross product between a PDA and an NFA.) Intuitively, the PDA part of C corresponds to simulating a binary counter counting up to the value n, and the Q + -VASS part of C corresponds to simulating the Q + -VASS M.
Let u (resp. v) be the configuration of C obtained by pairing c_init (resp. c_fin) with the PDA configuration c (resp. c′).
RL to Q + -VASS RL used 2k + 1 counters, where k is the number of counters of the [0, 1]-VASS 0? RL instance, which is 6 in our case. Finally, the reduction from Q + -VASS RL to Q + -PVASS did not add any new counters. It follows that the lower bound already holds for Q + -PVASS of dimension 13.
We can go one step further. Similar to reachability in Q + -VASS RL, we can define coverability in Q + -VASS RL, where we want to cover some configuration in a given number of steps. Let us inspect the Q + -VASS RL M that we constructed in Section 6. We claim that g(c_init)_0 can reach g(c_fin)_{2n} in 3n steps iff g(c_init)_0 can cover g(c_fin)_{2n} in 3n steps. The left-to-right implication is trivial. For the other direction, notice that in any run of 3n steps in M starting from g(c_init)_0, the value of d can be increased by at most 2n. Further, for every counter x other than d, we maintain the invariant x + x̄ = 1 throughout. It then follows that the only way to cover g(c_fin)_{2n} in 3n steps is by actually reaching g(c_fin)_{2n}. Hence, coverability in Q + -VASS RL is also NEXPTIME-hard. Since the reduction in Section 6.1 preserves coverability, we obtain: Theorem 7.1. The coverability problem for 13-dimensional Q + -PVASS is NEXPTIME-hard.
Let us now consider the encoding of the numbers that we use. It can be easily verified that in the final Q + -PVASS instance that we construct using our chain of reductions from Sections 4 to 6, all the numbers are fixed constants, except for the numbers appearing in the initial and final configurations, which are encoded in binary. Hence, the above theorem holds for 13-dimensional Q + -PVASS where the numbers are encoded in binary. We show that it is possible to strengthen this result to unary-encoded numbers at the cost of increasing the number of counters. More specifically, in Section D of the appendix, we present an alternate reduction which, given an instance of reachability in [0, 1]-VASS 0? RL over k counters, produces an instance of coverability in Q + -VASS RL over 10k + 25 counters where all numbers are encoded in unary. (We have already discussed the idea of this reduction in the Introduction.) Since the proof in Section 5 shows that reachability in [0, 1]-VASS 0? RL is NEXPTIME-hard already over 6 counters, this proves that coverability in Q + -VASS RL over 85 counters where all numbers are encoded in unary is also NEXPTIME-hard. Since the reduction given in Section 6.1 from Q + -VASS RL to Q + -PVASS produces a Q + -PVASS of bounded stack-height, does not add any new counters and does not change the encodings of the numbers, we can now conclude the following theorem.
Theorem 7.2.The coverability problem for Q + -PVASS is NEXPTIME-hard, already over Q + -PVASSes of dimension 85, bounded stack-height, and when all numbers are encoded in unary.
This hardness result is very strong, as it simultaneously holds for coverability, bounded stack-height, constant dimension, and unary encodings. In contrast, reachability in Q + -PVASS of arbitrary dimension can be decided in NEXPTIME, even when all the numbers are encoded in binary.
Finally, the reduction from Q + -VASS RL to Q + -PVASS in Section 6.1 only used the fact that for every n, (1) there is a PDA of size O(log(n)) which can "count" exactly up to n, and (2) we can take the product of a PDA with a Q + -VASS. For any model of computation that satisfies these two constraints,
the corresponding reachability problem over continuous counters should also be NEXPTIME-hard. For instance, if we replace the stack in Q + -PVASS with Boolean programs, to obtain Boolean programs with continuous counters, then their reachability and coverability problems are also NEXPTIME-hard. A similar result also holds when we replace the stack with a (discrete) one-counter machine which can only increment its counter and whose accepting condition is reaching a particular counter value given in binary. For both models, the reachability and coverability problems are also in NEXPTIME, because both models can be converted into an exponentially larger Q + -VASS, for which these problems are in NP [Blondin and Haase 2017, Theorem 4.14].

CONCLUSION
We have shown that the reachability problem for continuous pushdown VASS is NEXPTIME-complete. While our upper bound works for an arbitrary number of counters, our lower bound already holds for the coverability problem for continuous pushdown VASS with a constant number of counters, bounded stack-height, and all numbers encoded in unary.
As part of future work, it might be interesting to study the complexity of coverability and reachability for continuous pushdown VASS in low dimensions. It might also be interesting to study the coverability and reachability problems for extensions of continuous pushdown VASS. For instance, it is already known that reachability in continuous VASS in which the counters are allowed to be tested for zero is undecidable [Blondin and Haase 2017, Theorem 4.17]. It might be interesting to see if this is also the case when the continuous counters are endowed with operations such as resets or transfers. Finally, it would be nice to extend the decidability result here to other machine models, such as continuous VASS with higher-order stacks.
then there is a run π′ ∈ T⋆ with the required properties. The key idea is to show that we can repeatedly fire a small fraction of the run π from (1) in order to get the desired π′. Conditions (2) and (3) ensure that there exists a small enough fraction for which, when π is fired, we obtain a run under Q + -semantics. The quantities in the definition capture the total negative and positive effect on each counter by the transitions in π. One can show that each subrun satisfies the required bounds: for the boundary cases, this follows from the fact that the fraction is chosen smaller than the respective bound, and for the other cases, one shows the bound directly. The next lemma observes that if there is a run under Q + -semantics, then there is a run reaching the same vector which satisfies a support condition. Lemma A.4 (Lemma 4.4, [Blondin and Haase 2017]).
One can check that halving satisfies the conditions: by halving the firing fraction each time, we can ensure that if a transition removes tokens from a place that was positive before, we retain some tokens in this place afterwards; if a transition puts tokens in a place, we also retain some fraction of them afterwards.
We can now prove Lemma A.1. By using Lemma A.4, we obtain the first run; similarly, we obtain the second. We can now fire a small fraction of both runs in order to arrive at vectors which satisfy conditions (2) and (3) of Lemma A.3. Applying Lemma A.3 to the resulting run, we get the desired run.

B PROOFS FOR SECTION 5

B.1 Proof details of Lemma 5.2

The lemma is immediate when r is a nop rule of the form (q, nop, q′). We first prove it in the case when r is an increment rule of the form (q, inc_i, q′). For the soundness part, one sets α = β = 1/2^n. To prove the completeness part, suppose f(c) reaches some configuration D by firing r1 with fraction α, and D reaches D′ by firing r2 with fraction β. Since r1 decrements z by 1 and also has a zero-test on z, it follows that f(c)(z) − α = 0 and so α = f(c)(z) = 1/2^n. By construction of the rule r1, it follows that the values of the counters x_i, x_{1−i}, z and t in D are f(c)(x_i) + 1/2^n, f(c)(x_{1−i}), 0 and 1/2^n respectively. Now, since r2 decrements t by 1 and also has a zero-test on t, it follows that D(t) − β = 0 and so β = D(t) = 1/2^n. By construction of the rule r2, it follows that state(D′) = q′ and the values of the counters x_i, x_{1−i}, z and t in D′ are f(c)(x_i) + 1/2^n, f(c)(x_{1−i}), 1/2^n and 0 respectively. It follows that if we let c reach c′ via r in the machine M, then f(c′) = D′. For the case when r is a doubling rule of the form (q, double_i, q′), a similar argument applies, where α and β (in both the soundness and the completeness parts) will be f(c)(x_i) and 2 · f(c)(x_i) respectively.

B.2 Proof details of Theorem 5.3
Suppose   can reach a final configuration    = (  , , ) for some number  in  steps. By repeated applications of the soundness part of the Gadget Simulation lemma (Lemma 5.2), it follows that (  ) can reach  := (   ) in 2 steps. Since state() =   and  ( 0 ) =  ( 1 ), it follows that if we set  =  (),  =  ( 0 ), then by construction of   and   we have     ,   − −−−−− → (  , 0). Suppose (  ) can reach the configuration (  , 0) in 2( + 1) steps in C 1 . By construction of C 1 , any execution from  := (  ) to  ′ := (  , 0) must be of the form  for some fractions  1 ,  1 , . . .,  +1 ,  +1 and some rules  1 , . . .,   of the machine M. By the completeness part of the Gadget Simulation lemma (Lemma 5.2), it follows that there exist configurations  1 ,  2 , . . .,   such that each   = (  ) and . Since  +1   is fireable from   , it must be the case that state(  ) =   . By construction of the finish gadget, it follows that   ( 0 ) =   ( 1 ). By construction of the mapping , it follows that   is of the form (  , , ) for some number . Hence,   can reach some final configuration in  steps in M.

B.3 Proof details of Lemma 5.4
Suppose  ′ is the same as  except that  ′ () =  ()/2 and  ′ () =  () + 1/. Let  be a run such that  () = 0 and  () = 1/. Since  1 decrements  by 2 and also has a zero-test on , it must be the case that  1 =  ()/2. Hence, . Since  2 decrements  by 1 and also has a zero-test on , it must be the case that  2 =  1 () =  ()/2. Hence, . Since  3 decrements  by 1 and also has a zero-test on , it must be the case that . Since  4 decrements  by 1 and also has a zero-test on , it must be the case that  4 =  3 () = 1/. Hence, the proof is complete.
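The zero-test mechanism that drives this lemma can be illustrated concretely. In the sketch below (our own illustration; the effect vector (−2, +1) and the function name are assumptions chosen to mirror the halving behaviour described above), the zero test on the decremented counter admits exactly one firing fraction, which transfers exactly half of that counter's value.

```python
from fractions import Fraction

def fire_with_zero_test(x, y):
    """Fire a rule with effect (-2, +1) under a zero test on the first
    counter.  In continuous semantics the effect is scaled by a fraction
    alpha; the test x - 2*alpha = 0 admits only alpha = x/2, so the
    second counter receives exactly x/2 and the first is emptied."""
    alpha = x / 2                      # the only admissible fraction
    assert x - 2 * alpha == 0          # the zero test succeeds
    return x - 2 * alpha, y + alpha

x, y = fire_with_zero_test(Fraction(1, 4), Fraction(0))
print(x, y)  # prints 0 1/8
```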

B.4 Proof details of Theorem 5.5
Suppose (  ) can reach (  , 0) in C 1 in 2( + 1) steps by means of a run . By applying the Initialization lemma  times and then using the rule from  0 to   which does nothing, it follows that starting from   we can reach the configuration  where state() =   and  ( 0 ) =  ( 1 ) = 0,  () = 1/2  ,  () = 0,  () = 1/ and  () = 1. Since C contains all the rules of C 1 and since (  ) and  agree on their states and on the counter values of  0 ,  1 ,  and , we can simply execute the run  from the configuration  in C to reach    .
Suppose   can reach    in exactly ℓ steps, and let  : →    :=  ℓ be such a run. By construction of the [0, 1]-VASS 0? RL cc, there must be exactly one index  such that state(  ) =   and state(  −1 ) =  0 and the rule fired between   −1 and   does nothing to the counters. By construction of C, this means that the steps from  0 to   −1 are steps in the initialization gadget and the steps from   −1 to  ℓ are steps in C 1 . Since no step in C 1 affects the value of the counter  and since    () is 1, it follows that   −1 () = 1. Since the steps from  0 to   −1 are steps in the initialization gadget and since state( 0 ) = state(  −1 ) =  0 , by repeated applications of the Initialization lemma, it follows that  − 1 must be 4 and the values of the counters in   −1 satisfy . Since the counter values of   and   −1 are the same, the same equation holds if we replace   −1 with   . Notice that state(  ) =   ,   ( 0 ) =   ( 1 ) =   () = 0,   () = 1/2  . Since the run from   to    uses only the rules from C 1 , it follows that executing the same sequence of rules also gives us a run from (  ) to (  , 0) in C 1 of length ℓ −  = ℓ − 4 − 1 = 2( + 1).
Stage 3: Hardness of coverability in Q + -VASS RL with unary encodings using amplifiers.
In the next stage, assuming the existence of objects called amplifiers, we prove our main result: the coverability problem for Q + -VASS RL is NEXPTIME-hard even for a constant number of counters and even when all the numbers are encoded in unary. We first define the notion of an amplifier.
Let  1 ,  2 , . . .,   be natural numbers, all of them encoded in binary using exactly  bits, and let  be a number such that each   ≤ 2  . Assume the following theorem.
• The control flow graph of  ( 1 , . . .,   , ) is a path of length exactly 3(2 + 4) which begins at state() and ends at state( ′ ).
• Every value of  and the value  ′ () is encoded in unary.
• There is exactly one run from  which can reach a configuration  such that  () ≥  ′ ().
Further, this run is of length exactly 3(2 + 4) and the configuration reached at the end of this run is  ′ itself.
Here the control flow graph of a Q + -VASS M is the graph obtained by taking the set of states of M as its vertices and connecting any two vertices by an edge if there is a rule between them. The reason behind calling such a Q + -VASS an amplifier is that, starting from a configuration encoded in unary, it is able to reach a configuration encoded in binary.
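The amplification idea can be illustrated with a small sketch (our own illustration, not the paper's gadget construction): using only "add one bit" and "halve" operations, each describable with unary constants, m rounds suffice to reach the binary-sized value p/2^m for any m-bit number p, while every intermediate value stays inside [0, 1].

```python
from fractions import Fraction

def amplify(p, m):
    """Reach p / 2**m from 0 using m rounds of 'add one bit, then halve'
    (Horner's rule on the binary expansion of p, least-significant bit
    first).  Every intermediate value stays in [0, 1], as required for
    counters of a [0, 1]-VASS."""
    x = Fraction(0)
    for i in range(m):
        b = (p >> i) & 1          # bit i of p, LSB first
        x = (x + b) / 2
        assert 0 <= x <= 1
    return x

print(amplify(13, 5))  # prints 13/32: a binary-sized value in m = 5 rounds
```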
Assuming the above theorem, we now reduce coverability in Q + -VASS RL over super-structured instances to coverability in Q + -VASS RL with unary encodings, which will prove the required NEXPTIME-hardness result. In the next subsection, we will prove the Q + -VASS Amplifiers theorem, which will complete our overall proof.
Let ⟨M,   ,    , ℓ⟩ be a super-structured instance of coverability in Q + -VASS RL with counters  1 , . . .,   . (Note that   and    are encoded in binary.) Without loss of generality, we can assume that there are no incoming rules (resp. outgoing rules) to (resp. from) state(  ) (resp. state(   )). By definition of a super-structured instance, the denominator of each value in   and    is a power of 2. By multiplying each numerator with a sufficient power of 2, we may assume that the denominator of each value in the initial and final configurations is the same 2  for some number . (Note that this is a polynomial-time transformation.) We may further assume that all the numerators are described by the same number of bits (say ). Hence, we have 2 numerators, each of which is less than 2  and each of which is describable by exactly  bits.
The construction. Let  1 ,  2 ,  3 ,  4 ,  5 ,  6 be the states of   ,    ,   ,    ,   ,    respectively. We now construct a Q + -VASS C as given in Figure 11. We use the following shorthand notation in that figure: Between the states  2 and  3 we have a rule labelled by   (, , ). This is shorthand for  + 2 rules of the form , where  1 , . . .,  +1 are new states. Similarly, from  6 to  we have a rule labelled by   (, , ). This is also shorthand for  + 2 rules of the same kind as before, obtained by introducing fresh states  ′ 1 , . . .,  ′ +1 and the following rules. Note that in the first set of rules the   's are incremented, whereas in the second set they are decremented.
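The shorthand expansion described above can be sketched as follows. The state names u1, u2, … below are our own placeholders for the fresh intermediate states: a single macro-labelled rule between two states is replaced by a chain of ordinary rules threaded through fresh states, one rule per component of the macro.

```python
def expand_macro(src, dst, effects):
    """Replace a macro rule from src to dst by len(effects) chained
    rules, introducing len(effects) - 1 fresh intermediate states."""
    chain = [src] + [f"u{i}" for i in range(1, len(effects))] + [dst]
    return list(zip(chain, effects, chain[1:]))

for rule in expand_macro("q2", "q3", ["inc c1", "inc c2", "inc c3"]):
    print(rule)
# prints ('q2', 'inc c1', 'u1'), ('u1', 'inc c2', 'u2'), ('u2', 'inc c3', 'q3')
```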
Note that the set of counters of C is the union of the sets of counters of M  , M and M  , which we denote by y, x and z respectively.
Observations about C. Before we state the initial and the final configurations of our reduction and prove its correctness, let us observe some facts about the constructed Q + -VASS C.
Fact 1: Let  be any configuration of C whose state is  1 such that  () =   () for every counter  ∈ y. Let  ′ be the configuration of C whose state is  2 and whose counter values are the same as that of , except that  ′ () =    () for every counter  ∈ y. Then, there is a run from  to  ′ of length exactly 3(2 + 4). Further, any run from  to a configuration  ′′ such that  ′′ () ≥  ′ () must necessarily visit  ′ after exactly 3(2 + 4) steps.
Proof of Fact 1: This is simply the assumption on the amplifier ⟨M  ,   ,    ⟩ recast in terms of the Q + -VASS C. By construction of C and by the fact that ⟨M  ,   ,    ⟩ is a Q + -VASS amplifier, it follows that there is a run from  to  ′ of length exactly 3(2 + 4) in C. Further, suppose there is a run from  to some configuration  ′′ such that  ′′ () ≥  ′ (). Since the counter  is never affected after leaving the state  2 to fire the rules corresponding to   (, , ), we can assume that the supposed run from  to  ′′ consists only of those rules belonging to M  in C. By definition of an amplifier, the required claim then follows.
Proof of Fact 2: By construction of the rules of the gadget   (, , ), it follows that there is a run from  to  ′ of length exactly  + 2. Further, suppose there is a run from  to some configuration  ′′ such that  ′′ (  ) ≥ 0 and  ′′ (  ) ≥ 1 for every 1 ≤  ≤  + 2. By definition of an amplifier, it follows that the only outgoing rules from  2 in C are the edges corresponding to the rules in   (, , ). By assumption on the configuration , we have  (  ) =   /2  for every 1 ≤  ≤  and  (  ) = 1 −  (  ) for every 1 ≤  ≤  + 2. Since the counters {  ,   : 1 ≤  ≤  + 2} are never affected after the rules in   (, , ), we can assume that the supposed run from  to  ′′ consists only of those rules in   (, , ). By construction of the rules in   (, , ) and by the definition of  and  ′ , it follows that this run must necessarily visit  ′ after exactly  + 2 steps.
Fact 3: Let  be any configuration of C whose state is  5 such that  () =   () for every counter  ∈ z. Let  ′ be the configuration of C whose state is  6 and whose counter values are the same as that of , except that  ′ () =    () for every counter  ∈ z. Then there is a run from  to  ′ of length exactly 3(2 + 4). Further, any run from  to a configuration  ′′ such that  ′′ () ≥  ′ () must necessarily visit  ′ after exactly 3(2 + 4) steps.
Proof of Fact 3: The proof is similar to the proof of Fact 1.
Fact 4: Let  be any configuration of C whose state is  6 such that  () =    () for every counter  ∈ z. If  () ≥    () for every  ∈ x, then there is a run from  to  ′ of length exactly  + 2, where  ′ is the configuration of C whose state is  and whose counter values are the same as that of , except that  ′ (  ) = 0 and  ′ (  ) = 1 for every 1 ≤  ≤  + 2 and  ′ (  ) =  (  ) −   /2  for every 1 ≤  ≤ . Further, any run from  to a configuration  ′′ such that  ′′ (  ) ≥ 0 and  ′′ (  ) ≥ 1 for every 1 ≤  ≤  + 2 necessarily implies that  () ≥    () for every  ∈ x and that the run visits  ′ after exactly  + 2 steps.
Proof of Fact 4: The proof is similar to the proof of Fact 2.
Proof. Suppose   can reach a configuration  which covers    in exactly ℓ steps in M by a run . First let us define the following configurations:
•  2 is the same configuration as   except that its state is  2 and  2 () =    () for every  ∈ y.
By Facts 1, 2, 3 and 4, we have that   can reach  2 in exactly 3(2 + 4) steps,  2 can reach  3 in exactly  + 2 steps,  5 can reach  6 in exactly 3(2 + 4) steps and  6 can reach  7 in exactly  + 2 steps. Further, from the construction it is clear that  4 can reach  5 in exactly one step and also that  7 covers    . By assumption, there is a run  from   to  of exactly  steps in M. By construction of  3 and  4 , it can be observed that the same run  is also a run from  3 to  4 in C. It then follows that we have a run from   to  7 of length exactly ℓ  .
Suppose   can reach a configuration  which covers    in exactly ℓ C steps in C by a run . First let us define the following configurations:
•  2 is the same configuration as   except that its state is  2 and  2 () =    () for every  ∈ y.
•  3 is the same configuration as  2 except that its state is  3 and  3 (  ) = 0,  3 (  ) = 1 for every 1 ≤  ≤  + 2 and  3 (  ) =   (  ) for every 1 ≤  ≤ .
•  4 is the same configuration as  3 except that its state is  4 and  4 (  ) =  (  ) +   /2  for every 1 ≤  ≤ .
•  5 is the same configuration as  4 except that its state is  5 .
•  6 is the same configuration as  5 except that its state is  6 and  6 () =    () for every  ∈ y.
By Facts 1 and 2, it must be the case that after 3(2 + 4) steps of ,  2 is reached and after  + 2 steps from there,  3 is reached. Now let us look at the first point in the run  when the state  4 is reached, and let us call the configuration at that point  4 . By assumption on M, there are no outgoing rules from  4 in M and so the only way to move out of  4 in C is to take the rule to  5 to reach a configuration  5 whose counter values are the same as that of  4 . By construction of C, no counter in z is affected before reaching the state  5 and so this means that  5 () =   () =   () for any  ∈ z. Since  () ≥    () =    () and since we have a run from  5 to  , by Fact 3, it must be the case that starting from  5 , after 3(2 + 4) steps of , we reach a configuration  6 which is the same as  5 except that its state is  6 and  6 () =    () for any  ∈ z. By Fact 4, it must be the case that starting from  6 , after  + 2 steps of , we reach a configuration  7 whose state is  and whose counter values are the same as that of  6 , except that  7 (  ) = 0 and  7 (  ) = 1 for every 1 ≤  ≤  + 2 and  7 (  ) =  6 (  ) −   /2  for every 1 ≤  ≤ . Since there are no outgoing rules from  in C, it follows that  7 =  . This in turn implies that  6 =  6 ,  5 =  5 and  4 =  4 . This further implies that the path from  3 to  4 in  is a path of length exactly ℓ consisting only of those rules from M. By definition of  3 and  4 , this run is also a run of length exactly ℓ in M from the configuration   to the configuration  whose state is  4 and  (  ) =  (  ) +   /2  for every . Since  covers    , it follows that there is a run of length exactly ℓ in M from   which covers    . □ Finally, we make an observation on the number of counters that we have used for the reduction. Inspecting the arguments given in Stages 1 and 2, we already see that coverability in Q + -VASS RL
for super-structured instances is already NEXPTIME-hard for 15 counters. The reduction given in Stage 3 gives us an instance of coverability in Q + -VASS RL over unary encodings with  + 2(2 + 5) counters, where  is the number of counters in the given super-structured instance of coverability in Q + -VASS RL . Assuming Theorem D.3, it then follows that:

Theorem D.5. Coverability in Q + -VASS RL over unary encodings is NEXPTIME-hard, already over Q + -VASSes of dimension 85.
Since the reduction from Q + -VASS RL to Q + -PVASS given in Subsection 6.1 preserves coverability and produces a Q + -PVASS of bounded stack-height, we get Theorem 7.2.
All that remains is to prove Theorem D.3, i.e., construct Q + -VASS amplifiers. Let  1 ,  2 , . . .,   be natural numbers, all of them encoded in binary using exactly  bits, and let  be a number such that each   ≤ 2  . Let p be the vector ( 1 , . . .,   ). We want to construct a Q + -VASS amplifier for (p, ) in time polynomial in ,  and . We do this in the following way. Instead of constructing a Q + -VASS which is an amplifier for (p, ), we first construct a [0, 1]-VASS 0? RL which is an amplifier for (p, ). Then we apply the reduction from Section 6 on this [0, 1]-VASS 0? RL amplifier and show that the resulting Q + -VASS is also an amplifier for (p, ).
To do this, we first introduce gadgets which can do basic operations on counters such as addition, doubling and halving. We have already seen all of these gadgets as part of the reduction given in Section 5.
Consider the [0, 1]-VASS 0? RL 's given in Figures 12, 13 and 14, which we call the addition, subtraction, doubling and halving gadgets respectively. In these gadgets, ,  and  are some three counters. The addition and subtraction gadgets are parameterized by two counters  and , and the doubling and halving gadgets are parameterized by a single counter . The addition and the doubling gadgets are the same as the ones that appeared as gadgets for the inc and double rules respectively in Section 5. The halving gadget is a simplification of the initialization gadget used in Section 5.
A configuration  of any one of these gadgets M  is said to be good if
• M  is the addition gadget and  () +  () ≤ 1.
• M  is the halving gadget and  is any configuration.
The following lemma is immediate from the construction of these gadgets.
Lemma D.6 (The gadget lemma). Suppose  is a good configuration of one of these gadgets M  such that state() =  and  () = 0. Then, there is exactly one run of length two in any of these gadgets from  which reaches a configuration  ′ which is the same as  except that state( ′ ) =  ′ and
•  ′ () =  () +  () if M  is the addition gadget.
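The addition case of the gadget lemma can be traced concretely. The sketch below is our own encoding of one standard two-step implementation, not the paper's exact gadget: each step carries a zero test that pins the firing fraction to a full transfer, so after exactly two steps the first counter holds the sum.

```python
from fractions import Fraction

def addition_gadget(x, y):
    """Two continuous steps computing x + y.  Step 1 drains y into an
    auxiliary counter t (the zero test on y forces the fraction alpha = y);
    step 2 drains t onto x (the zero test on t forces alpha = t).  The
    'good' condition x + y <= 1 keeps every counter inside [0, 1]."""
    assert x + y <= 1
    t, y = y, Fraction(0)       # step 1: effect (y: -1, t: +1), test y = 0
    x, t = x + t, Fraction(0)   # step 2: effect (t: -1, x: +1), test t = 0
    return x, y, t

x, y, t = addition_gadget(Fraction(1, 3), Fraction(1, 2))
print(x)  # prints 5/6
```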

Fig. 3. Illustration of Lemma 3.5. A derivation of the original grammar (shown on the left) is transformed into a derivation of the new grammar (on the right).
is a finite set of transitions and Δ ⊆  ×  ×  is a finite set of rules. A configuration of C is a tuple  = (, v) where  ∈  is the current state of C and v ∈ [0, 1]  is the vector representing the current values of the counters of C. We use the notations state(), val() and  () to denote , v and v(), respectively. Let  = (, ,  ′ ) ∈ Δ be a rule with  = (, ) and let  ∈ (0, 1]. A step from a configuration  to another configuration  ′ via the pair (,  ) (denoted by   − − →  ′ ) is possible if and only if state() = , state( ′ ) =  ′ , val( ′ ) = val() +   and val( ′ ) () = 0 for all  ∈
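The step relation just defined can be sketched in code. The encoding below (rules as tuples carrying an effect vector and a set of zero-tested counter indices) is our own illustration of the semantics, not the paper's notation.

```python
from fractions import Fraction

def step(config, rule, alpha):
    """One continuous step: scale the rule's effect by a fraction alpha
    in (0, 1], add it to the counter vector, and require that the result
    stays in [0, 1]^d and that every zero-tested counter equals 0."""
    state, vals = config
    src, effect, zero_tests, dst = rule
    if state != src or not (0 < alpha <= 1):
        return None
    new_vals = tuple(v + alpha * e for v, e in zip(vals, effect))
    if any(not (0 <= v <= 1) for v in new_vals):   # counters stay in [0, 1]
        return None
    if any(new_vals[i] != 0 for i in zero_tests):  # zero tests must succeed
        return None
    return (dst, new_vals)

# The zero test pins down the fraction: emptying a counter holding 1/2
# with effect -1 is possible only with alpha = 1/2.
cfg = ("q", (Fraction(1, 2), Fraction(0)))
rule = ("q", (-1, 1), {0}, "q'")
print(step(cfg, rule, Fraction(1, 2)))  # succeeds
print(step(cfg, rule, Fraction(1, 4)))  # prints None: counter 0 is not 0
```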
The initial state is  0 and the final state is  2 .