Research Article | Open Access

Risk of Stochastic Systems for Temporal Logic Specifications

Published: 19 April 2023


Abstract

The wide availability of data, coupled with computational advances in artificial intelligence and machine learning, promises to enable many future technologies such as autonomous driving. While there have been a variety of successful demonstrations of these technologies, critical system failures have repeatedly been reported. Even if rare, such system failures pose a serious barrier to adoption without a rigorous risk assessment. This article presents a framework for the systematic and rigorous risk verification of systems. We consider a wide range of system specifications formulated in signal temporal logic (STL) and model the system as a stochastic process, permitting discrete-time and continuous-time stochastic processes. We then define the STL robustness risk as the risk of lacking robustness against failure. This definition is motivated by the observation that system failures are often caused by a lack of robustness to modeling errors, system disturbances, and distribution shifts in the underlying data-generating process. Within the definition, we permit general classes of risk measures and focus on tail risk measures such as the value-at-risk and the conditional value-at-risk. While the STL robustness risk is in general hard to compute, we propose the approximate STL robustness risk as a more tractable notion that upper bounds the STL robustness risk. We show how the approximate STL robustness risk can be accurately estimated from system trajectory data. For discrete-time stochastic processes, we show under which conditions the approximate STL robustness risk can even be computed exactly. We illustrate our verification algorithm in the autonomous driving simulator CARLA and show how a least risky controller can be selected among four neural network lane-keeping controllers for five meaningful system specifications.


1 INTRODUCTION

Over the next decade, large amounts of data will be generated and stored as devices that perceive and control the world become more affordable and available. Impressive demonstrations of data-driven and machine learning-enabled technologies exist already today, e.g., robotic manipulation [44], solving games [55, 74], and autonomous driving [19]. However, occasionally occurring system failures impede the use of these technologies, particularly when system safety is a concern. For instance, neural networks, frequently used for perception and control in autonomous systems, are known to be fragile and non-robust [25, 77]. The long tails of training data distributions pose particular challenges, e.g., natural variations in weather and lighting conditions [61].

Moving forward, we expect that system failures will appear less frequently as these technologies advance; nonetheless, algorithms for the systematic and rigorous risk verification of such systems are needed. For instance, the National Transportation Safety Board emphasized in a statement in connection with an Uber accident from 2018 “the need for safety risk management requirements for testing automated vehicles on public roads” [12]. In this article, we show how to reason about the risk of systems that are modeled as stochastic processes. We consider a wide range of system specifications formulated in signal temporal logic (STL) [9, 51] and present a systematic way to quantify and compute the risk of a system lacking robustness against failure.

1.1 Related Work

Depending on the research discipline and application, risk can have various interpretations. While risk is often defined as a failure probability, it can also be understood in more general terms as a metric defined over a cost distribution, e.g., the expected value or the variance of the distribution. We focus on tail risk measures to capture the rare yet costly events of a distribution. In particular, we consider the value-at-risk (VaR), i.e., quantiles of a distribution, and the conditional value-at-risk (CVaR) [62, 63], i.e., the expected value over a quantile. Tail risk measures are increasingly used in robotics and control applications where system safety is important [50].

Risk in control. Control design under risk objectives and constraints is increasingly studied by control theorists, as machine learning components integrated into closed-loop systems introduce stochastic system uncertainty. Oftentimes, the CVaR risk measure is used to capture risk due to its convexity and the property of being an upper bound to the VaR. For instance, the authors in Reference [72] consider a stochastic optimal control problem with CVaR constraints over the distance to obstacles. Linear quadratic control under risk constraints was considered in Reference [81] to trade off risk and mean performance. A similar idea is followed for the risk-constrained minimum mean squared error estimator in Reference [37]. Risk-aware model predictive control was considered in References [29, 76], while References [17, 73] present data-driven and distributionally robust model predictive controllers. Risk-aware control barrier functions for safe control synthesis were proposed in Reference [2], while Reference [58] demonstrates the use of risk in sampling-based planning. We remark that we view these works as orthogonal to our article: we provide a data-driven framework for risk assessment under complex temporal logic specifications, and we hope to inform future control design strategies.

Stochastic system verification. System verification has a long history in complementing and informing the control design process, e.g., using model checking [6, 14]. When dealing with stochastic systems, verification becomes computationally more challenging [40]. Statistical model checking has recently gained attention by relying on the availability of data instead of exhaustive computation [1, 22, 43, 85]. Another line of work considers stochastic barrier functions for the safety verification of dynamical systems [32, 59]. The authors in References [33, 34] deal with the verification of stochastic dynamical systems during runtime. Motivated by the fragility and sensitivity of neural networks [25, 77], a special focus has recently been on verifying neural networks in open loop [38, 75] and closed loop [30]. We remark that the algorithms presented in this article permit the verification of general classes of systems, including systems with neural networks, as long as we can obtain data, e.g., from a simulator. The guarantees obtained in these previous works are either worst-case guarantees or in terms of failure probabilities. Towards incorporating tail risk measures, the authors in References [15, 16] propose a risk-aware safety analysis framework using the CVaR. We are instead interested in system verification under more complex temporal logic specifications and risk.

Temporal logics. We use signal temporal logic to express a wide range of system specifications, e.g., surveillance (“visit regions A, B, and C every \(10\!-\!60\) sec”), safety (“always between \(5\!-\!25\) sec stay at least 1 m away from region D”), and many others. For deterministic signals, STL allows one to calculate the robustness by which a signal satisfies an STL specification. In particular, the authors in Reference [21] proposed the robustness degree as the maximal tube around a signal in which all signals satisfy the specification. The size of the tube consequently measures the robustness of this signal with respect to the specification. As the robustness degree is in general hard to calculate, the authors in Reference [21] also proposed approximate yet easier-to-calculate robust semantics. Many forms of robust semantics have appeared, such as space and time robustness [18], the arithmetic-geometric mean robustness [53], the smooth cumulative robustness [28], averaged STL [3], and Reference [64], in which a connection with linear time-invariant filtering is established that allows one to define various types of robust semantics.

For stochastic signals, the authors in References [35, 41, 45, 67, 80] propose notions of probabilistic signal temporal logic in which chance constraints over predicates are considered, while the Boolean and temporal operators of STL are not changed. Similarly, notions of risk signal temporal logic have recently appeared in References [46, 48, 69] that define risk constraints over predicates while not changing the definitions of the Boolean and temporal operators. In this article, we instead define risk over the whole STL specification. The work in Reference [23] considers the probability of an STL specification being satisfied instead of using chance or risk constraints over predicates. The authors in Reference [84] consider hyperproperties in STL, i.e., properties between multiple system executions. With more of a focus on control synthesis, and for the less expressive formalism of linear temporal logic, the authors in References [10, 42, 82] consider control over belief spaces, while the authors in Reference [27] consider probabilistic satisfaction over Markov decision processes. Complementary to these works, References [5, 60] propose techniques to infer STL specifications from data towards explaining the underlying data.

Risk verification with temporal logics. In this article, we quantify and compute the risk of lacking robustness against failure. We argue that the consideration of robustness in system verification is crucial and are particularly motivated by the fact that system failures are often caused by a lack of robustness to modeling errors, system disturbances, and distribution shifts in the underlying data-generating process. The authors in Reference [4] further highlight the importance of robustness in system verification. Probably closest to our article are the works in References [31, 70, 71] and References [7, 8]. In References [31, 70, 71], the authors combine data-driven and model-based verification techniques to obtain information about the satisfaction probability of a partially known system. The authors in References [7, 8] present a purely data-driven verification technique to estimate probabilities over robustness distributions of the system. Conceptually, our work differs in two directions. First, we consider general risk measures to be able to focus on the tails of the robustness distribution. We also show how to estimate the robustness risk from data with high confidence. Second, we use the robustness degree as defined in Reference [21] to obtain robustness distributions, which allows us to obtain a precise geometric interpretation of risk. This article is based on our previous work [47]. Here, we additionally permit continuous-time stochastic processes and the CVaR as a risk measure. We also show under which conditions the STL robustness risk can be calculated exactly, and we present extensive simulations within the autonomous driving simulator CARLA [19].

1.2 Contributions and Article Outline

Our general goal is to analyze the robustness of stochastic processes and to quantify and compute the risk of a system lacking robustness against system failure. We make the following contributions:

We consider discrete-time and continuous-time stochastic processes and show under which conditions the robust semantics and the robustness degree of STL are random variables. This enables us to define risk over these quantities.

We define the STL robustness risk as the risk of a system lacking robustness against failure of an STL specification. The definition permits general classes of risk measures and has a precise geometric interpretation in terms of the size of permissible disturbances. We also define the approximate STL robustness risk as a computationally tractable upper bound of the STL robustness risk.

For the VaR and the CVaR, we show how the approximate STL robustness risk can be estimated from system trajectory data. Importantly, no particular restriction on the distribution of the stochastic process has to be made. For discrete-time stochastic processes with a discrete state space, we show how the approximate STL robustness risk can even be computed exactly.

We estimate the risk of four neural network lane-keeping controllers within the autonomous driving simulator CARLA. We show how to find the least risky controller.

In Section 2, we present background on signal temporal logic, stochastic processes, and risk measures. In Section 3, we define the STL robustness risk and the STL approximate robustness risk. Section 4 shows how the approximate STL robustness risk can be estimated from data, while Section 5 shows under which conditions it can be computed exactly. The simulation results within CARLA are presented in Section 6 followed by conclusions in Section 7.


2 BACKGROUND

We first provide background on signal temporal logic, stochastic processes, and risk measures.

2.1 Signal Temporal Logic

Signal temporal logic (STL) is based on deterministic signals \(x:T\rightarrow \mathbb {R}^n\) where \(T\) denotes the time domain [51]. We particularly consider continuous time \(T:=\mathbb {R}\) (the set of real numbers) and discrete time \(T:=\mathbb {Z}\) (the set of integers). The atomic elements of STL are predicates, i.e., functions \(\mu :\mathbb {R}^n\rightarrow \mathbb {B}\) where \(\mathbb {B}:=\lbrace \top ,\bot \rbrace\) is the set of Booleans consisting of the true and false elements \(\top :=1\) and \(\bot :=-1\), respectively. Let us associate an observation map \(O^\mu \subseteq \mathbb {R}^n\) with a predicate \(\mu\) that indicates the region within the state space where the predicate \(\mu\) is true, i.e., \(\begin{align*} O^\mu :=\mu ^{-1}(\top), \end{align*}\) where \(\mu ^{-1}(\top)\) denotes the inverse image of \(\top\) under the function \(\mu\). We assume throughout the article that the sets \(O^\mu\) and \(O^{\lnot \mu }\) are non-empty and measurable, which is a mild technical assumption. In other words, the sets \(O^\mu\) and \(O^{\lnot \mu }\) are elements of the Borel \(\sigma\)-algebra \(\mathcal {B}^n\) of \(\mathbb {R}^n\).

Remark 1.

For convenience, the predicate \(\mu\) is often defined via a predicate function \(h:\mathbb {R}^n\rightarrow \mathbb {R}\) as \(\begin{align*} \mu (\zeta):={\left\lbrace \begin{array}{ll}\top & \text{if } h(\zeta)\ge 0\\ \bot &\text{otherwise} \end{array}\right.} \end{align*}\) for \(\zeta \in \mathbb {R}^n\). In this case, we have \(O^\mu =\lbrace \zeta \in \mathbb {R}^n|h(\zeta)\ge 0\rbrace\).
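As a concrete instance of Remark 1, a disk-shaped predicate can be encoded by a predicate function \(h\); a minimal sketch, where the center, radius, and function names are illustrative choices rather than quantities from the article:

```python
import math

def h_circle(zeta, center=(4.0, 5.0), radius=1.0):
    # Illustrative predicate function h: nonnegative inside a disk of
    # the given radius around `center`, negative outside (cf. Remark 1).
    return radius - math.dist(zeta, center)

def mu(zeta):
    # Predicate induced by h: True iff zeta lies in the observation map
    # O^mu = {zeta in R^n | h(zeta) >= 0}.
    return h_circle(zeta) >= 0
```

For example, `mu((4.0, 5.0))` is true since the center lies in \(O^\mu\), while points far from the center are mapped to false.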

The syntax of STL, which allows system specifications to be formulated recursively, is defined as (1) \(\begin{align} \phi \; ::= \; \top \; | \; \mu \; | \; \lnot \phi \; | \; \phi ^{\prime } \wedge \phi ^{\prime \prime } \; | \; \phi ^{\prime } U_I \phi ^{\prime \prime } \; | \; \phi ^{\prime } \underline{U}_I \phi ^{\prime \prime }, \, \end{align}\) where \(\phi ^{\prime }\) and \(\phi ^{\prime \prime }\) are STL formulas, \(U_I\) is the future until operator with time interval \(I\subseteq \mathbb {R}_{\ge 0}\), and \(\underline{U}_I\) is the past until operator. The Boolean operators \(\lnot\) and \(\wedge\) encode negations and conjunctions, respectively. We say that an STL formula \(\phi\) as in Equation (1) is bounded if the time interval \(I\) is restricted to be compact. Based on these elementary operators, we can define the set of operators \(\begin{align*} \phi ^{\prime } \vee \phi ^{\prime \prime }&:=\lnot (\lnot \phi ^{\prime } \wedge \lnot \phi ^{\prime \prime }) &\text{ (disjunction operator)},\\ F_I\phi &:=\top U_I \phi &\text{ (future eventually operator)},\\ \underline{F}_I\phi &:=\top \underline{U}_I \phi &\text{ (past eventually operator)},\\ G_I\phi &:=\lnot F_I\lnot \phi &\text{ (future always operator)},\\ \underline{G}_I\phi &:=\lnot \underline{F}_I\lnot \phi &\text{ (past always operator).} \end{align*}\)

2.1.1 Semantics.

To determine whether or not a signal \(x:T\rightarrow \mathbb {R}^n\) satisfies an STL formula \(\phi\), we define the semantics of \(\phi\) by means of the satisfaction function \(\beta ^\phi :\mathfrak {F}(T,\mathbb {R}^n)\times T \rightarrow \mathbb {B}\).1 In particular, \(\beta ^\phi (x,t)=\top\) indicates that the signal \(x\) satisfies the formula \(\phi\) at time \(t\), while \(\beta ^\phi (x,t)=\bot\) indicates that \(x\) does not satisfy \(\phi\) at time \(t\). While the intuitive meanings of the Boolean operators \(\lnot\) (“not”), \(\wedge\) (“and”), and \(\vee\) (“or”) are clear, we note that the future until operator \(\phi ^{\prime } {U}_I \phi ^{\prime \prime }\) encodes that \(\phi ^{\prime }\) holds until \(\phi ^{\prime \prime }\) holds. Specifically, \(\beta ^{\phi ^{\prime } {U}_I \phi ^{\prime \prime }}(x,t)=\top\) means that \(\phi ^{\prime }\) holds for all times after \(t\) (not necessarily at time \(t\)) until \(\phi ^{\prime \prime }\) holds within the time interval \((t\oplus I)\cap T\).2 Similarly, \(\beta ^{F_I \phi }(x,t)=\top\) encodes that \(\phi\) holds eventually within \((t\oplus I)\cap T\), while \(\beta ^{G_I \phi }(x,t)=\top\) encodes that \(\phi\) holds always within \((t\oplus I)\cap T\). For a formal definition of \(\beta ^\phi (x,t)\), we refer to Appendix A.

We are usually interested in the satisfaction function \(\beta ^\phi (x,0)\), which determines the satisfaction of \(\phi\) by \(x\) at time zero, the time at which we assume \(\phi\) to be enabled. An STL formula \(\phi\) is hence said to be satisfiable if \(\exists x\in \mathfrak {F}(T,\mathbb {R}^n)\) such that \(\beta ^\phi (x,0)=\top\). The following example is taken from Lindemann et al. [47] and used as a running example throughout the article:

Example 1.

Consider a delivery robot that needs to perform two time-critical delivery tasks in regions \(A\) and \(B\) sequentially while avoiding areas \(C\) and \(D\); see Figure 2. We consider the STL formula (2) \(\begin{align} \phi :=G_{[0,3]}(\lnot \mu _{C} \wedge \lnot \mu _{D}) \wedge F_{[1,2]}(\mu _{A} \wedge F_{[0,1]}\mu _{B}), \end{align}\) where the regions \(A\), \(B\), \(C\), and \(D\) are encoded by the predicates \(\mu _A\), \(\mu _B\), \(\mu _C\), and \(\mu _D\), respectively, that are defined below. Let the state \(x(t)\in \mathbb {R}^{10}\) of the system at time \(t\) be \(\begin{align*} x(t):=[\begin{matrix}r(t) & a & b& c& d \end{matrix}]^T \end{align*}\) where \(r(t)\) is the robot position at time \(t\) and where \(a\), \(b\), \(c\), and \(d\) denote the center points of the regions \(A\), \(B\), \(C\), and \(D\) that are defined as \(\begin{align*} a:=[\begin{matrix} 4 & 5 \end{matrix}]^T, \; b:=[\begin{matrix} 7 & 2 \end{matrix}]^T, \; c:=[\begin{matrix} 2 & 3 \end{matrix}]^T, \; d:=[\begin{matrix} 6 & 4 \end{matrix}]^T. \end{align*}\)

The predicates \(\mu _A\), \(\mu _B\), \(\mu _C\), and \(\mu _D\) are now defined by their observation maps

(3)
(4)
where \(\Vert \cdot \Vert _2\) is the Euclidean norm and \(\Vert \cdot \Vert _\infty\) is the infinity norm. In Figure 2, six different robot trajectories \(r_1\)-\(r_6\) are shown. It can be seen that the signal \(x_1\) that corresponds to \(r_1\) violates \(\phi\), while \(x_2\)-\(x_6\) satisfy \(\phi\), i.e., we have \(\beta ^\phi (x_1,0)=\bot\) and \(\beta ^\phi (x_j,0)=\top\) for all \(j\in \lbrace 2,... ,6\rbrace\).

Remark 2.

The operators \({U}_I\) and \(\underline{U}_I\) are the strict non-matching versions of the until operators. In particular, \(\phi ^{\prime } {U}_I \phi ^{\prime \prime }\) is: (1) strict, as it does not require \(\phi ^{\prime }\) to hold at the current time \(t\), and (2) non-matching, as it does not require that \(\phi ^{\prime }\) and \(\phi ^{\prime \prime }\) have to hold at the same time. When dealing with continuous-time stochastic systems later in this article, we replace the strict non-matching versions \({U}_I\) and \(\underline{U}_I\) by the non-strict matching versions that we denote by \(\vec{U}_I\) and \(\vec{\underline{U}}_I\); see Appendix A for their formal definitions. We note that STL with until operators \({U}_I\) and \(\underline{U}_I\) is more expressive than STL with \(\vec{U}_I\) and \(\vec{\underline{U}}_I\). When excluding Zeno-signals, there is, however, no difference between these two notions [24]. As one rarely encounters Zeno-signals, we argue that the restriction to the non-strict matching version of the until operator for continuous-time stochastic processes is not restrictive in practice.

2.1.2 Robustness Degree.

Importantly, one may also be interested in the quality of satisfaction and additionally ask how robustly the signal \(x\) satisfies the STL formula \(\phi\) at time \(t\). To answer this question, the authors in Fainekos and Pappas [21, Definition 7] define the robustness degree that we recall next in a slightly modified manner. If \(\beta ^\phi (x,t)=\top\), then the robustness degree quantifies how much the signal \(x\) can be perturbed by additive noise before changing the value of \(\beta ^\phi (x,t)\). Towards a formal definition, let us first define the set of signals that violate \(\phi\) at time \(t\) as \(\begin{align*} \mathcal {L}^{\lnot \phi }(t):=\lbrace x^{\prime }\in \mathfrak {F}(T,\mathbb {R}^n)\,|\,\beta ^\phi (x^{\prime },t)=\bot \rbrace . \end{align*}\)

To measure distances between signals, let us define the metric \(\kappa :\mathfrak {F}(T,\mathbb {R}^n)\times \mathfrak {F}(T,\mathbb {R}^n)\rightarrow \overline{\mathbb {R}}_{\ge 0}\) as \(\begin{align*} \kappa (x,x^{\prime }):=\sup _{t\in T} d\big (x(t),x^{\prime }(t)\big), \end{align*}\)

where \(\overline{\mathbb {R}}_{\ge 0}:=\mathbb {R}_{\ge 0}\cup \lbrace \infty \rbrace\) is the set of nonnegative extended real numbers and where \(d:\mathbb {R}^n\times \mathbb {R}^n\rightarrow \overline{\mathbb {R}}_{\ge 0}\) is a metric assigning a distance in \(\mathbb {R}^n\), e.g., the Euclidean distance. Throughout the article, we use the extended definitions of the supremum and infimum operators, e.g., \(\sup \mathbb {R}= \infty\). Note that \(\kappa (x,x^{\prime })\) is the \(L_\infty\) norm of the signal \(x-x^{\prime }\) and measures the distance between the signals \(x\) and \(x^{\prime }\).
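For discrete-time signals observed over a common finite horizon, the metric \(\kappa\) can be computed directly as the largest pointwise distance; a minimal sketch with \(d\) taken as the Euclidean distance (the finite horizon and equal signal lengths are assumptions made here for illustration):

```python
import math

def kappa(x, x_prime):
    # kappa(x, x') = sup_t d(x(t), x'(t)) for two finite-horizon
    # discrete-time signals, with d the Euclidean distance.
    assert len(x) == len(x_prime), "signals must share the time domain"
    return max(math.dist(xt, xt_p) for xt, xt_p in zip(x, x_prime))
```

For instance, two planar signals that differ by one unit at a single time instant are at distance `kappa(...) == 1.0`.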

To set some general notation, for a metric space \((S,\kappa)\) with metric \(\kappa\), we denote by \(\begin{align*} \bar{\kappa }(x,S^{\prime }):=\inf _{x^{\prime }\in S^{\prime }} \kappa (x,x^{\prime }) \end{align*}\) the distance of a point \(x\in S\) to a nonempty set \(S^{\prime }\subseteq S\). Using this definition, the robustness degree \(\text{RD}^\phi :\mathfrak {F}(T,\mathbb {R}^n)\times T\rightarrow \overline{\mathbb {R}}_{\ge 0}\) is now defined via the metric \(\kappa\) as the distance of the signal \(x\) to the set of violating signals \(\mathcal {L}^{\lnot \phi }(t)\).

Definition 1

(Robustness Degree3).

For a signal \(x:T\rightarrow \mathbb {R}^n\) and an STL formula \(\phi\), the robustness degree \(\text{RD}^{\phi }(x,t)\) is defined as \(\begin{align*} \text{RD}^{\phi }(x,t):=\bar{\kappa }\big (x,\text{cl}(\mathcal {L}^{\lnot \phi }(t))\big), \end{align*}\)

where \(\text{cl}(\mathcal {L}^{\lnot \phi }(t))\) denotes the closure of the set \(\mathcal {L}^{\lnot \phi }(t)\).

By definition of the robustness degree, the following properties hold: If \(\text{RD}^{\phi }(x,t)\gt 0\), then \(\beta ^\phi (x,t)=\top\), i.e., the signal \(x\) satisfies \(\phi\) at time \(t\). It further follows that all signals \(x^{\prime }\in \mathfrak {F}(T,\mathbb {R}^n)\) with \(\kappa (x,x^{\prime })\lt \text{RD}^{\phi }(x,t)\) are such that \(\beta ^\phi (x^{\prime },t)=\top\). The robustness degree hence defines a robust neighborhood, i.e., a set strictly containing \(x\) such that \(\beta ^\phi (x,t)=\beta ^\phi (x^{\prime },t)\) for all \(x^{\prime }\) in this neighborhood. Finally, note that \(\text{RD}^{\phi }(x,t)=0\) may imply either \(\beta ^\phi (x,t)=\top\) or \(\beta ^\phi (x,t)=\bot\), i.e., the signal \(x\) may either satisfy or violate \(\phi\) at time \(t\).

2.1.3 Robust Semantics.

Note that it is in general difficult to calculate the robustness degree \(\text{RD}^{\phi }(x,t),\) as the set \(\mathcal {L}^{\lnot \phi }(t)\) is hard to calculate. The authors in Fainekos and Pappas [21] introduce the robust semantics \(\rho ^\phi :\mathfrak {F}(T,\mathbb {R}^n)\times T\rightarrow \overline{\mathbb {R}}\) as an alternative way of finding a robust neighborhood where \(\overline{\mathbb {R}}:=\mathbb {R}\cup \lbrace -\infty ,\infty \rbrace\) is, in direct analogy to \(\overline{\mathbb {R}}_{\ge 0}\), the set of extended real numbers.

Definition 2

(Robust Semantics).

For a signal \(x:T\rightarrow \mathbb {R}^n\) and an STL formula \(\phi\), the robust semantics \(\rho ^\phi (x,t)\) are recursively defined as \(\begin{align*} \rho ^{\top }(x,t)& := \infty ,\\ \rho ^{\mu }(x,t)& := {\left\lbrace \begin{array}{ll} \bar{d}\big (x(t),\text{cl}(O^{\lnot \mu })\big) &\text{if } x(t)\in O^{\mu }\\ -\bar{d}\big (x(t),\text{cl}(O^{\mu })\big) &\text{otherwise,} \end{array}\right.}\\ \rho ^{\lnot \phi }(x,t) &:= -\rho ^{\phi }(x,t),\\ \rho ^{\phi ^{\prime } \wedge \phi ^{\prime \prime }}(x,t) &:= \min (\rho ^{\phi ^{\prime }}(x,t),\rho ^{\phi ^{\prime \prime }}(x,t)),\\ \rho ^{\phi ^{\prime } U_I \phi ^{\prime \prime }}(x,t) &:= \underset{t^{\prime \prime }\in (t\oplus I)\cap T}{\text{sup}} \Big (\min \big (\rho ^{\phi ^{\prime \prime }}(x,t^{\prime \prime }),\underset{t^{\prime }\in (t,t^{\prime \prime })\cap T}{\text{inf}}\rho ^{\phi ^{\prime }}(x,t^{\prime }) \big)\Big), \\ \rho ^{\phi ^{\prime } \underline{U}_I \phi ^{\prime \prime }}(x,t) &:= \underset{t^{\prime \prime }\in (t\ominus I)\cap T}{\text{sup}} \Big (\min \big (\rho ^{\phi ^{\prime \prime }}(x,t^{\prime \prime }), \underset{t^{\prime }\in (t^{\prime \prime },t)\cap T}{\text{inf}}\rho ^{\phi ^{\prime }}(x,t^{\prime }) \big)\Big). \end{align*}\)

Remark 3.

With respect to Remark 2, the non-strict matching version of the until operators replace the open time intervals \((t,t^{\prime \prime })\) in Definition 2 by the closed time intervals \([t,t^{\prime \prime }]\) so \(\begin{align*} \rho ^{\phi ^{\prime } \vec{U}_I \phi ^{\prime \prime }}(x,t) &:= \underset{t^{\prime \prime }\in (t\oplus I)\cap T}{\text{sup}} \Big (\min \big (\rho ^{\phi ^{\prime \prime }}(x,t^{\prime \prime }),\underset{t^{\prime }\in [t,t^{\prime \prime }]\cap T}{\text{inf}}\rho ^{\phi ^{\prime }}(x,t^{\prime }) \big)\Big), \\ \rho ^{\phi ^{\prime } \vec{\underline{U}}_I \phi ^{\prime \prime }}(x,t) &:= \underset{t^{\prime \prime }\in (t\ominus I)\cap T}{\text{sup}} \Big (\min \big (\rho ^{\phi ^{\prime \prime }}(x,t^{\prime \prime }), \underset{t^{\prime }\in [t^{\prime \prime },t]\cap T}{\text{inf}}\rho ^{\phi ^{\prime }}(x,t^{\prime }) \big)\Big). \end{align*}\)

Importantly, by slight modification of Fainekos and Pappas [21, Theorem 28], we know that (5) \(\begin{align} \rho ^{\phi }(x,t)\le \text{RD}^{\phi }(x,t). \end{align}\) The robust semantics \(\rho ^\phi (x,t)\) hence provides a tractable under-approximation of the robustness degree \(\text{RD}^{\phi }(x,t)\). The robust semantics are sound in the sense that \(\beta ^\phi (x,t)=\top\) if \(\rho ^\phi (x,t)\gt 0\) and \(\beta ^\phi (x,t)=\bot\) if \(\rho ^\phi (x,t)\lt 0\) [21, Proposition 30].
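For bounded formulas over finite-horizon discrete-time signals, the recursion in Definition 2 can be evaluated directly. Below is a minimal sketch for a fragment (predicate, negation, conjunction, and the derived \(F_I\) and \(G_I\) operators over predicates), under the simplifying assumption that the predicate function \(h\) itself returns the signed distance to the boundary of \(O^\mu\) (true, e.g., for \(h(\zeta)=r-\Vert \zeta -c\Vert _2\)); all function names are illustrative:

```python
def rho_pred(h, x, t):
    # rho^mu(x, t): signed distance of x(t) to the boundary of O^mu,
    # assuming h itself is that signed distance.
    return h(x[t])

def rho_not(rho):
    # rho^{not phi} = -rho^{phi}
    return -rho

def rho_and(rho1, rho2):
    # rho^{phi' and phi''} = min(rho^{phi'}, rho^{phi''})
    return min(rho1, rho2)

def rho_eventually(h, x, t, interval):
    # rho^{F_I mu}(x, t) = sup over t'' in (t + I) of rho^mu(x, t'').
    a, b = interval
    window = range(t + a, min(t + b, len(x) - 1) + 1)
    return max(rho_pred(h, x, tpp) for tpp in window)

def rho_always(h, x, t, interval):
    # rho^{G_I mu}(x, t) = inf over t'' in (t + I) of rho^mu(x, t'').
    a, b = interval
    window = range(t + a, min(t + b, len(x) - 1) + 1)
    return min(rho_pred(h, x, tpp) for tpp in window)
```

With the scalar predicate \(h(s)=s\) (i.e., \(\mu\): “the state is nonnegative”) and the signal \(x=(1,2,-1,3)\), `rho_always(h, x, 0, (0, 3))` returns \(-1\), correctly flagging the violation at \(t=2\).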

Example 1

(continued).

Consider again the trajectories shown in Figure 2. We obtain \(\rho ^\phi (x_1,0)=-0.15\), \(\rho ^\phi (x_2,0)=0.01\), and \(\rho ^\phi (x_j,0)=0.25\) for all \(j\in \lbrace 3,... ,6\rbrace\). The negative robustness of \(x_1\) is due to \(r_1\) intersecting the region \(D\). The marginal robustness of \(x_2\) is explained by \(r_2\) only marginally avoiding the region \(D\), while all other trajectories avoid the region \(D\) robustly.

2.2 Random Variables and Stochastic Processes

Instead of interpreting an STL specification \(\phi\) over deterministic signals, we will interpret \(\phi\) over stochastic processes. Consider, therefore, the probability space \((\Omega ,\mathcal {F},P)\), where \(\Omega\) is the sample space, \(\mathcal {F}\) is a \(\sigma\)-algebra of \(\Omega\), and \(P:\mathcal {F}\rightarrow [0,1]\) is a probability measure.

Let \(Z\) denote a real-valued random vector, i.e., a measurable function \(Z:\Omega \rightarrow \mathbb {R}^n\). When \(n=1\), we say \(Z\) is a random variable. We refer to \(Z(\omega)\) as a realization of the random vector \(Z\) where \(\omega \in \Omega\). Since \(Z\) is a measurable function, a probability space can be defined for \(Z\) so probabilities can be assigned to events related to values of \(Z\).4 Consequently, a cumulative distribution function (CDF) \(F_Z(z)\) can be defined for \(Z\). Given a random vector \(Z\), we can derive other random variables. Assume, for instance, a measurable function \(g: \mathbb {R}^n \rightarrow \mathbb {R}\), then \(g(Z(\omega))\) becomes a derived random variable, since function composition preserves measurability; see, e.g., Durrett [20] for more details.

A stochastic process is a function \(X:T\times \Omega \rightarrow \mathbb {R}^n\), where \(X(t,\cdot)\) is a random vector for each fixed \(t\in T\). A stochastic process can be viewed as a collection of random vectors \(\lbrace X(t,\cdot)|t\in T\rbrace\) that are defined on a common probability space \((\Omega ,\mathcal {F},P)\) and that are indexed by \(T\). For a fixed \(\omega \in \Omega\), the function \(X(\cdot ,\omega)\) is a realization of the stochastic process. Another interpretation is that a stochastic process is a collection of deterministic functions of time \(\lbrace X(\cdot ,\omega)|\omega \in \Omega \rbrace\) that are indexed by \(\Omega\).
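This second interpretation can be mimicked in simulation by encoding \(\omega\) as a random seed, so each fixed \(\omega\) yields one deterministic realization; a toy sketch with a one-dimensional Gaussian random walk (a purely illustrative process, not a model from the article):

```python
import random

def realization(omega, horizon=10):
    # X(., omega): fixing omega (the seed) fixes the entire trajectory,
    # so repeated calls with the same omega return the same signal.
    rng = random.Random(omega)
    x, traj = 0.0, []
    for _ in range(horizon):
        x += rng.gauss(0.0, 1.0)
        traj.append(x)
    return traj

# For fixed t, X(t, .) is a random variable; sampling over omega yields
# samples from its distribution.
samples_at_t3 = [realization(omega)[3] for omega in range(100)]
```

Here `realization(0)` always returns the same trajectory, while different seeds produce different realizations of the same process.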

2.3 Risk Measures

A risk measure is a function \(R:\mathfrak {F}(\Omega ,\mathbb {R})\rightarrow \mathbb {R}\) that maps from the set of real-valued random variables to the real numbers. In particular, we refer to the input of a risk measure \(R\) as the cost random variable, since typically a cost is associated with the input of \(R\). Risk measures hence allow for a risk assessment in terms of such cost random variables.

In this article, we particularly use the expected value, the value-at-risk \(VaR_\beta\), and the conditional value-at-risk \(CVaR_\beta\) at risk level \(\beta \in (0,1)\), which are commonly used risk measures; see Figure 3. The \(VaR_\beta\) of a random variable \(Z:\Omega \rightarrow \mathbb {R}\) is defined as \(\begin{align*} VaR_\beta (Z):=\inf \lbrace \alpha \in \mathbb {R}\,|\,P(Z\le \alpha)\ge \beta \rbrace , \end{align*}\)

i.e., the right \(1-\beta\) quantile of \(Z\). The \(CVaR_\beta\) of \(Z\) is defined as \(\begin{align*} CVaR_\beta (Z):=\inf _{\alpha \in \mathbb {R}}\Big (\alpha +(1-\beta)^{-1}E\big ([Z-\alpha ]^+\big)\Big), \end{align*}\) where \([Z-\alpha ]^+:=\max (Z-\alpha ,0)\). When the CDF \(F_Z\) of \(Z\) is continuous, it holds that \(CVaR_\beta (Z)=E(Z|Z\ge VaR_\beta (Z))\), i.e., \(CVaR_\beta (Z)\) is the expected value of \(Z\) conditioned on the events where \(Z\) is greater than or equal to \(VaR_\beta (Z)\).
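Both tail risk measures can be estimated from i.i.d. samples of \(Z\); a minimal sketch using the sorted-sample quantile for \(VaR_\beta\) and the tail average for \(CVaR_\beta\) (the tail-average form matches the conditional-expectation interpretation only when the CDF is continuous, and finite-sample estimates carry statistical error):

```python
import math

def var_beta(samples, beta):
    # Empirical VaR_beta: smallest sample alpha with F_Z(alpha) >= beta,
    # i.e., the right 1 - beta quantile of the empirical distribution.
    s = sorted(samples)
    k = math.ceil(beta * len(s)) - 1
    return s[max(k, 0)]

def cvar_beta(samples, beta):
    # Empirical CVaR_beta: average of all samples at or above VaR_beta.
    v = var_beta(samples, beta)
    tail = [z for z in samples if z >= v]
    return sum(tail) / len(tail)
```

For example, on the samples \(1,\dots ,100\) with \(\beta =0.95\), `var_beta` returns 95 and `cvar_beta` returns 97.5.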

There are various desirable properties that a risk measure \(R\) may satisfy; see Majumdar and Pavone [50] for more information. We emphasize that our presented method is compatible with any monotone risk measure, where monotonicity of \(R\) is defined as follows:

For two cost random variables \(Z,Z^{\prime }\in \mathfrak {F}(\Omega ,\mathbb {R})\), the risk measure \(R\) is monotone if \(\begin{align*} Z(\omega) \le Z^{\prime }(\omega) \text{ for all } \omega \in \Omega \;\; \Rightarrow \;\; R(Z) \le R(Z^{\prime }). \end{align*}\)

The assumption of considering monotone risk measures is very mild, and both the value-at-risk \(VaR_\beta (Z)\) and the conditional value-at-risk \(CVaR_\beta (Z)\) as well as the expected value are monotone.


3 THE RISK OF LACKING ROBUSTNESS AGAINST FAILURE

We interpret STL formulas \(\phi\) over stochastic processes \(X\) instead of deterministic signals \(x\). It is, however, not immediately clear how to interpret the satisfaction of \(\phi\) by \(X\). One way is to argue about the probability of satisfaction; see, e.g., Farahani et al. [23], but probabilities provide no information about the risk and the robustness of \(X\) with respect to \(\phi\). In fact, some realizations of \(X\) may satisfy \(\phi\) robustly, while some other realizations of \(X\) may satisfy \(\phi\) only marginally or even violate \(\phi\). This observation leads us to the use of risk measures \(R\) to be able to argue about the risk of the stochastic process \(X\) lacking robustness against failure of the specification \(\phi\).
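The resulting recipe can be sketched end to end: sample realizations of \(X\), compute the robust semantics of each realization, and apply a risk measure to the negated robustness so that larger values indicate higher risk. Below is a toy sketch for the specification “always stay nonnegative” on a drifting random walk; the dynamics, horizon, and risk level are all illustrative choices, not quantities from the article:

```python
import math
import random

def rho_always_nonneg(traj):
    # Robust semantics of G(x >= 0) at time 0 for a 1-D discrete-time
    # signal: the minimum value over the horizon.
    return min(traj)

def sample_trajectory(seed, horizon=20):
    # Toy stochastic process: upward drift plus Gaussian noise.
    rng = random.Random(seed)
    x, traj = 1.0, []
    for _ in range(horizon):
        x += 0.1 + rng.gauss(0.0, 0.2)
        traj.append(x)
    return traj

def empirical_var(samples, beta):
    # Empirical value-at-risk at level beta (right 1 - beta quantile).
    s = sorted(samples)
    return s[max(math.ceil(beta * len(s)) - 1, 0)]

# Risk of lacking robustness: a risk measure applied to the negated
# robustness of sampled realizations.
costs = [-rho_always_nonneg(sample_trajectory(seed)) for seed in range(500)]
risk = empirical_var(costs, 0.95)
```

A less risky controller would shift the distribution of `costs` downward and thus lower `risk`; comparing this quantity across controllers mirrors the controller selection carried out in Section 6.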

3.1 Measurability of Semantics, Robustness Degree, and Robust Semantics

To define the risk of a stochastic process \(X\), we first need to show under which conditions the semantics \(\beta ^\phi (X,t)\), the robustness degree \(\text{RD}^{\phi }(X,t)\), and the robust semantics \(\rho ^\phi (X,t)\) are derived random variables. For discrete-time stochastic processes, no assumptions have to be made.

Theorem 1.

Let \(X\) be a discrete-time stochastic process, i.e., \(T:=\mathbb {Z}\). Let \(\phi\) be an STL specification as in Equation (1). Then \(\beta ^\phi (X(\cdot ,\omega),t)\), \(\text{RD}^\phi (X(\cdot ,\omega),t)\), and \(\rho ^\phi (X(\cdot ,\omega),t)\) are measurable in \(\omega\) for a fixed \(t\in T\), i.e., \(\beta ^\phi (X,t)\), \(\text{RD}^\phi (X,t)\), and \(\rho ^\phi (X,t)\) are random variables.

For continuous-time stochastic processes, however, we have to impose additional technical assumptions. Particularly, we have to restrict the class of STL formulas in Equation (1) and make further assumptions on the stochastic process \(X\).

Theorem 2.

Let \(X\) be a continuous-time stochastic process, i.e., \(T:=\mathbb {R}\). Let \(\phi\) be a bounded STL specification as in Equation (1), but where the strict non-matching until operators \({U}_I\) and \({\underline{U}}_I\) are replaced with the non-strict matching until operators \(\vec{U}_I\) and \(\vec{\underline{U}}_I\). Then \(\beta ^\phi (X(\cdot ,\omega),t)\) is measurable in \(\omega\) for a fixed \(t\in T\), i.e., \(\beta ^\phi (X,t)\) is a random variable. If \(X(\cdot ,\omega):\Omega \rightarrow \mathfrak {F}(T,\mathbb {R}^n)\) is measurable,5 then \(\text{RD}^\phi (X(\cdot ,\omega),t)\) is measurable in \(\omega\) for a fixed \(t\in T\), i.e., \(\text{RD}^\phi (X,t)\) is a random variable, and if additionally \(X(\cdot ,\omega)\) is a cadlag function6 for each \(\omega \in \Omega\), then \(\rho ^\phi (X(\cdot ,\omega),t)\) is measurable in \(\omega\) for a fixed \(t\in T\), i.e., \(\rho ^\phi (X,t)\) is a random variable.7

Consequently, the probabilities \(P(\beta ^{\phi }(X,t)\in B)\), \(P(\rho ^{\phi }(X,t)\in B)\), and \(P(\text{RD}^{\phi }(X,t)\in B)\)8 are well defined for measurable sets \(B\) from the corresponding measurable spaces. This enables us to define the STL robustness risk in the next section.

Remark 4.

We first note that the assumption of a bounded STL formula \(\phi\) with the non-strict matching until operator is made for a technical reason. While the restriction to bounded formulas limits our expressivity to finite time specifications, the consideration of the non-strict matching until operator is not restrictive, as discussed in Remark 2. We remark that Bartocci et al. [8] showed measurability of \(\rho ^\phi (X(\cdot ,\omega),t)\) under the assumption of a bounded STL specification \(\phi\) with non-strict matching until operators, while we additionally show measurability of the semantics \(\beta ^\phi (X(\cdot ,\omega),t)\) and the robustness degree \(\text{RD}^\phi (X(\cdot ,\omega),t)\) without any additional continuity assumptions on \(X\). Last, we recall that we do not need to assume that \(\phi\) is bounded for a discrete-time stochastic process as per Theorem 1.

3.2 The STL Robustness Risk

One way of defining the risk associated with a stochastic process \(X\) is to consider the satisfaction function \(\beta ^\phi (X,t)\). However, not much information about the robustness of \(X\) can be inferred due to binary encoding of \(\beta ^\phi (X,t)\). Instead, we consider the risk of the stochastic process \(X\) lacking robustness against failure of the specification \(\phi\) by considering the robustness degree \(\text{RD}^{\phi }(X,t)\).

Example 2.

Consider an electric RC circuit consisting of a resistor with resistance \(\mathcal {R}\) and a capacitor with capacitance \(\mathcal {C}:=1\). If the capacitor is initially charged with \(V_0:=5\), then the capacitor discharges its energy over time once the circuit is closed. In fact, the voltage over the capacitor is described by \(\begin{align*} V(t)=V_0\exp (-\tau t), \end{align*}\) where \(\tau :=1/\mathcal {R}\mathcal {C}\) is the time constant. Assume that the resistance is unknown and modeled as \(\mathcal {R}:=0.5+Z,\) where \(Z\) is a random variable following a beta distribution with probability density function \(f_Z(z):=\frac{1}{B(1.5,5)} z^{1.5-1} (1-z)^{5-1}\), where \(B(1.5,5)\) is the beta function with parameters 1.5 and 5. Consequently, the voltage \(V\) becomes a stochastic process of which we plot 200 realizations in Figure 4 (left). As a specification \(\phi\), we want the voltage \(V(t)\) to stay below 1 from 2 s onward, i.e., \(\begin{align*} \phi :=G_{[2,\infty)}(V\le 1). \end{align*}\) In Figure 4 (right), we show the histogram of the negative robustness degree \(-\text{RD}^{\phi }(V,0)\) for \(100,\!000\) realizations. To estimate the risk of the stochastic process \(V\) lacking robustness against failure of \(\phi\), we can now compose \(-\text{RD}^{\phi }(V,0)\) with a risk measure \(R\). For instance, the value-at-risk at level \(\beta :=0.9\) is \(VaR_{0.9}(-\text{RD}^{\phi }(V,0))\approx -0.38\). Recall that \(VaR_{0.9}(-\text{RD}^{\phi }(V,0))\) is the right 0.1 quantile of \(-\text{RD}^{\phi }(V,0)\). This means that with a probability of at least 0.9 the robustness degree is not smaller (i.e., greater) than \(|VaR_{0.9}(-\text{RD}^{\phi }(V,0))|\approx 0.38\) or, in other words, that in at most 10% of the cases the robustness is smaller than 0.38. This information is useful, as it allows us to quantify how much uncertainty our system can handle, e.g., when we do not know the value of \(V_0\) exactly.
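The numbers in this example are straightforward to reproduce by Monte Carlo simulation. The sketch below (Python; sample size and seed chosen for illustration) exploits that \(V(t)\) is strictly decreasing, so the robustness degree of \(\phi\) reduces to the margin at \(t=2\):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
V0 = 5.0

# R = 0.5 + Z with Z ~ Beta(1.5, 5), and tau = 1/(R*C) with C = 1
Z = rng.beta(1.5, 5.0, size=N)
tau = 1.0 / (0.5 + Z)

# Since V(t) = V0*exp(-tau*t) is strictly decreasing, the robustness degree
# of phi = G_[2,inf)(V <= 1) reduces to the margin at t = 2
rho = 1.0 - V0 * np.exp(-2.0 * tau)

# Risk of lacking robustness: the right 0.1 quantile of the cost -rho
# (the article reports approximately -0.38)
var_09 = float(np.quantile(-rho, 0.9))
```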

The previous example motivates the following definition for the risk of the stochastic process \(X\) lacking robustness against failure of \(\phi\) to which we refer as the STL robustness risk for brevity.

Definition 3 (STL Robustness Risk).

Given an STL formula \(\phi\) and a stochastic process \(X:T\times \Omega \rightarrow \mathbb {R}^n\), the risk of \(X\) lacking robustness against failure of \(\phi\) at time \(t\) is defined as \(\begin{align*} R(-\text{RD}^{\phi }(X,t)). \end{align*}\)

We remark that a large positive value of \(\text{RD}^{\phi }(X(\cdot ,\omega),t)\) for a realization \(\omega \in \Omega\) indicates robust satisfaction of \(\phi\). Therefore, the negative robustness degree \(-\text{RD}^{\phi }(X,t)\) is the cost random variable that is chosen as the input for the risk measure \(R\). This way, a large robustness degree results in a low cost. Finally, note that \(R(-\text{RD}^\phi (X^{\prime },t))\le R(-\text{RD}^\phi (X^{\prime \prime },t))\) implies that the stochastic process \(X^{\prime }\) is less risky than the stochastic process \(X^{\prime \prime }\) with respect to the specification \(\phi\).

3.3 The Approximate STL Robustness Risk

Unfortunately, the STL robustness risk \(R(-\text{RD}^{\phi }(X,t))\) can in general not be computed, as the robustness degree in Definition 1 is difficult to calculate. Instead, we will focus on \(R(-\rho ^\phi (X,t))\), which uses the robust semantics as an approximation of the STL robustness risk.

Definition 4 (Approximate STL Robustness Risk).

Given an STL formula \(\phi\) and a stochastic process \(X:T\times \Omega \rightarrow \mathbb {R}^n\), the approximate risk of \(X\) lacking robustness against failure of \(\phi\) at time \(t\) is defined as \(\begin{align*} R(-\rho ^\phi (X,t)). \end{align*}\)

Fortunately, the approximate STL robustness risk \(R(-\rho ^\phi (X,t))\) over-approximates the STL robustness risk \(R(-\text{RD}^{\phi }(X,t))\) when \(R\) is a monotone risk measure, as shown next.

Theorem 3.

Let \(X\) be a stochastic process, \(\phi\) be an STL specification as in Equation (1), and \(R\) be a monotone risk measure. Then it holds that \(\begin{align*} R(-\text{RD}^{\phi }(X,t))\le R(-\rho ^\phi (X,t)). \end{align*}\)

The previous result is important, as using \(R(-\rho ^\phi (X,t))\) instead of \(R(-\text{RD}^{\phi }(X,t))\) will not result in an optimistic risk assessment. Especially in safety-critical applications, it is desirable to be more risk-averse as opposed to being overly optimistic.

Sometimes one may be interested in scaling the robustness degree to associate a monetary cost with \(\text{RD}^{\phi }(X,t)\) to reflect the severity of events with low robustness. Let us for this purpose consider an increasing cost function \(C:\mathbb {R}\rightarrow \mathbb {R}\).

Corollary 1.

Let \(X\) be a stochastic process, \(\phi\) be an STL specification as in Equation (1), \(R\) be a monotone risk measure, and \(C\) be an increasing cost function. Then it holds that \(\begin{align*} R(C(-\text{RD}^{\phi }(X,t)))\le R(C(-\rho ^\phi (X,t))). \end{align*}\)


4 DATA-DRIVEN ESTIMATION OF THE APPROXIMATE STL ROBUSTNESS RISK

In this section, we show how the approximate STL robustness risk \(R(-\rho ^\phi (X,t))\) can be estimated from data. We assume that we have observed \(N\) independent realizations of the stochastic process \(X\), i.e., we know \(N\) realizations \(X(\cdot ,\omega ^1),... ,X(\cdot ,\omega ^N)\) where \(\omega ^1,... ,\omega ^N\in \Omega\) are drawn independently and according to the probability measure \(P\). A practical example would be a simulator from which we can unroll trajectories \(X(\cdot ,\omega ^i)\). For brevity, we denote \(X(\cdot ,\omega ^1),... ,X(\cdot ,\omega ^N)\) by \(X^1,... ,X^N\). In this way, one can think of \(X^1,... ,X^N\) as \(N\) independent copies of \(X\). We emphasize that we do not need knowledge of the distribution of \(X\). Our goal is to derive upper bounds of \(R(-\rho ^\phi (X,t))\) that hold with high probability. Let us, for convenience, first define the random variable \(\begin{align*} Z:=-\rho ^\phi (X,t). \end{align*}\)

For further convenience, let \(Z^i:=-\rho ^\phi (X^i,t)\) and let us also define the tuple \(\begin{align*} \mathcal {Z}:=(Z^1,... ,Z^N). \end{align*}\)

We consider the value-at-risk \(VaR_\beta (Z)\), the conditional value-at-risk \(CVaR_\beta (Z)\), and the mean \(E(Z)\). Particularly, we derive upper bounds \(\overline{VaR}_\beta (\mathcal {Z},\delta)\), \(\overline{CVaR}_\beta (\mathcal {Z},\delta)\), and \(\overline{E}(\mathcal {Z},\delta)\) that hold with a probability of at least \(1-\delta\). By Theorem 3 and Propositions 1, 2, and 3 (presented in the remainder), we then have computational algorithms to find tight upper bounds for the approximate STL robustness risk and hence for the STL robustness risk, and it holds that with a probability of \(1-\delta\) \(\begin{align*} &VaR_\beta (-\text{RD}^{\phi }(X,t))\le VaR_\beta (Z)\le \overline{VaR}_\beta (\mathcal {Z},\delta),\\ &CVaR_\beta (-\text{RD}^{\phi }(X,t))\le CVaR_\beta (Z)\le \overline{CVaR}_\beta (\mathcal {Z},\delta),\\ &E(-\text{RD}^{\phi }(X,t))\le E(Z)\le \overline{E}(\mathcal {Z},\delta). \end{align*}\)

4.1 Value-at-Risk (VaR)

For a risk level of \(\beta \in (0,1)\), recall that the VaR of \(Z\) is given by \(\begin{align*} VaR_\beta (Z):=\inf \lbrace \alpha \in \mathbb {R}\,|\,F_Z(\alpha)\ge \beta \rbrace , \end{align*}\) where \(F_{Z}(\alpha)\) denotes the CDF of \(Z\). To estimate \(F_{Z}(\alpha)\), we define the empirical CDF as \(\begin{align*} \widehat{F}(\alpha ,\mathcal {Z}):=\frac{1}{N}\sum _{i=1}^N\mathbb {I}(Z^i\le \alpha), \end{align*}\) where \(\mathbb {I}\) denotes the indicator function, i.e., \(\mathbb {I}(Z^i\le \alpha):=1\) if \(Z^i\le \alpha\) and \(\mathbb {I}(Z^i\le \alpha):=0\) otherwise. Let now \(\delta \in (0,1)\) be a probability threshold. Inspired by Szorenyi et al. [78], we calculate an upper bound of \(VaR_\beta (Z)\) as \(\begin{align*} \overline{VaR}_\beta (\mathcal {Z},\delta):=\inf \Big \lbrace \alpha \in \mathbb {R}\,\Big |\,\widehat{F}(\alpha ,\mathcal {Z})-\sqrt {\tfrac{\ln (2/\delta)}{2N}}\ge \beta \Big \rbrace \end{align*}\) and a lower bound as \(\begin{align*} \underline{VaR}_\beta (\mathcal {Z},\delta):=\inf \Big \lbrace \alpha \in \mathbb {R}\,\Big |\,\widehat{F}(\alpha ,\mathcal {Z})+\sqrt {\tfrac{\ln (2/\delta)}{2N}}\ge \beta \Big \rbrace , \end{align*}\) where we recall that \(\inf \emptyset =\infty\) for \(\emptyset\) being the empty set due to the extended definition of the infimum operator. We next show that \(\overline{VaR}_\beta (\mathcal {Z},\delta)\) and \(\underline{VaR}_\beta (\mathcal {Z},\delta)\) are upper and lower bounds of \(VaR_\beta (Z)\), respectively, with a probability of at least \(1-\delta\).

Proposition 1.

Assume that \(F_Z\) is continuous and let \(\delta \in (0,1)\) be a probability threshold and \(\beta \in (0,1)\) be a risk level. Let \(\overline{VaR}_\beta (\mathcal {Z},\delta)\) and \(\underline{VaR}_\beta (\mathcal {Z},\delta)\) be based on the data \(\mathcal {Z}\). With a probability of at least \(1-\delta\), it holds that \(\begin{align*} \underline{VaR}_\beta (\mathcal {Z},\delta)\le VaR_\beta (Z)\le \overline{VaR}_\beta (\mathcal {Z},\delta). \end{align*}\)

We remark that Proposition 1 assumes that \(F_Z\) is continuous. If \(F_Z\) is not continuous, then one can derive upper and lower bounds by using order statistics following Nikolakakis et al. [57, Lemma 3].
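The VaR bounds can be sketched as follows (Python; the band half-width \(\sqrt{\ln(2/\delta)/(2N)}\) corresponds to a Dvoretzky-Kiefer-Wolfowitz confidence band around the empirical CDF, one way to instantiate the construction of Szorenyi et al. [78]):

```python
import numpy as np

def var_bounds(zs, beta, delta):
    """High-confidence upper/lower bounds on VaR_beta(Z) from a confidence
    band of half-width sqrt(ln(2/delta)/(2N)) around the empirical CDF.
    Returns inf when the shifted band never reaches level beta, mirroring
    the convention inf(empty set) = inf."""
    zs = np.sort(np.asarray(zs, float))
    N = len(zs)
    eps = np.sqrt(np.log(2.0 / delta) / (2.0 * N))
    ecdf = np.arange(1, N + 1) / N  # empirical CDF at the sorted samples
    hi = ecdf - eps >= beta         # pessimistic band -> upper bound
    lo = ecdf + eps >= beta         # optimistic band -> lower bound
    ub = zs[np.argmax(hi)] if hi.any() else np.inf
    lb = zs[np.argmax(lo)] if lo.any() else np.inf
    return lb, ub

rng = np.random.default_rng(2)
z = rng.standard_normal(10_000)
lb, ub = var_bounds(z, beta=0.9, delta=0.01)  # brackets the true 0.9 quantile
```

By construction, the empirical quantile always lies between the two bounds, and the gap shrinks at rate \(O(1/\sqrt{N})\).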

4.2 Conditional Value-at-Risk (CVaR)

For a risk level of \(\beta \in (0,1)\), recall that the CVaR of \(Z\) is given by \(\begin{align*} CVaR_\beta (Z):=\inf _{\alpha \in \mathbb {R}}\left(\alpha +\frac{1}{1-\beta }E([Z-\alpha ]^+)\right)\!, \end{align*}\) where \([Z-\alpha ]^+:=\max (Z-\alpha ,0)\). For estimating \(CVaR_\beta (Z)\) from data \(\mathcal {Z}\), we focus here on the case where the random variable \(\rho ^\phi (X,t)\) (and hence \(Z\)) has bounded support for fixed \(t\). In particular, we assume that \(P(\rho ^\phi (X,t)\in [a,b])=1\). Note that \(\rho ^\phi (X,t)\) has bounded support when the function \(\rho ^\phi\) is bounded, which can be achieved either by construction of \(\phi\) or by clipping off \(\rho ^\phi\) outside the interval \([a,b]\) for some a priori chosen constants \(a\) and \(b\), i.e., values outside this interval are clipped to the end points \(a\) and \(b\) of the interval. We remark that clipping off \(\rho ^\phi\) is not restrictive in most practical applications, i.e., realizations of \(\rho ^\phi (X,t)\) that are larger than a sufficiently large value of \(b\gt 0\) indicate robust satisfaction of \(\phi\) and will not affect the risk associated with \(Z,\) while realizations of \(\rho ^\phi (X,t)\) smaller than \(a\lt 0\) violate the specification \(\phi\) already.9 We will provide illustrative examples in our simulations in Section 6. This boundedness assumption enables us now to directly leverage results from Wang and Gao [83] to estimate upper and lower bounds of \(CVaR_\beta (Z)\). Let us first define the empirical estimate of \(CVaR_\beta (Z)\) as \(\begin{align*} \widehat{CVaR}_\beta (\mathcal {Z}):=\inf _{\alpha \in \mathbb {R}}\left(\alpha +\frac{1}{N(1-\beta)}\sum _{i=1}^N[Z^i-\alpha ]^+\right)\!. \end{align*}\)

Based on Wang and Gao [83, Theorem 3.1], we can now calculate an upper bound of \(CVaR_\beta (Z)\) as \(\begin{align*} &\overline{CVaR}_\beta (\mathcal {Z},\delta):=\widehat{CVaR}_\beta (\mathcal {Z})+\sqrt {\frac{5\ln (3/\delta)}{N(1-\beta)}}(b-a) \end{align*}\)

and a lower bound as \(\begin{align*} &\underline{CVaR}_\beta (\mathcal {Z},\delta):=\widehat{CVaR}_\beta (\mathcal {Z})-\sqrt {\frac{11\ln (3/\delta)}{N(1-\beta)}}(b-a). \end{align*}\)

We would like to highlight that the upper and lower bounds \(\overline{CVaR}_\beta (\mathcal {Z},\delta)\) and \(\underline{CVaR}_\beta (\mathcal {Z},\delta)\), respectively, become less accurate with larger values of \((b-a)\), which we can account for by increasing the number of observed trajectories \(N\). The following proposition follows immediately from Wang and Gao [83, Theorem 3.1]:

Proposition 2.

Let \(\delta \in (0,1)\) be a probability threshold and \(\beta \in (0,1)\) be a risk level. Assume that \(P(\rho ^\phi (X,t)\in [a,b])=1\). Let \(\overline{CVaR}_\beta (\mathcal {Z},\delta)\) and \(\underline{CVaR}_\beta (\mathcal {Z},\delta)\) be based on the data \(\mathcal {Z}\). With a probability of at least \(1-\delta\), it holds that \(\begin{align*} \underline{CVaR}_\beta (\mathcal {Z},\delta)\le CVaR_\beta (Z)\le \overline{CVaR}_\beta (\mathcal {Z},\delta). \end{align*}\)
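A sketch of these bounds in Python; the plug-in estimate \(\widehat{CVaR}_\beta(\mathcal{Z})\) is implemented here in its variational form evaluated at the empirical \(VaR_\beta\), which attains the infimum over \(\alpha\):

```python
import numpy as np

def cvar_bounds(zs, beta, delta, a, b):
    """Wang and Gao-style concentration bounds around the empirical CVaR,
    assuming P(Z in [a, b]) = 1 (e.g., after clipping the robust
    semantics to [a, b])."""
    zs = np.clip(np.asarray(zs, float), a, b)
    N = len(zs)
    alpha = np.quantile(zs, beta)  # empirical VaR attains the infimum
    cvar_hat = float(alpha + np.mean(np.maximum(zs - alpha, 0.0)) / (1.0 - beta))
    ub = cvar_hat + np.sqrt(5.0 * np.log(3.0 / delta) / (N * (1.0 - beta))) * (b - a)
    lb = cvar_hat - np.sqrt(11.0 * np.log(3.0 / delta) / (N * (1.0 - beta))) * (b - a)
    return lb, cvar_hat, ub

rng = np.random.default_rng(3)
zs = rng.uniform(0.0, 1.0, size=50_000)
lb, cvar_hat, ub = cvar_bounds(zs, beta=0.9, delta=0.05, a=0.0, b=1.0)
# the true CVaR_0.9 of Uniform(0,1) is 0.95; lb and ub bracket it w.h.p.
```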

Remark 5.

The case where \(Z\) has unbounded support, but where \(Z\) is sub-Gaussian or sub-exponential has been considered in Bhat and L. A. [11], Brown [13], Kolla et al. [39], Mhammedi et al. [54], Thomas and Learned-Miller [79].

4.3 Mean

Define the empirical estimate of the mean \(E(Z)\) as \(\begin{align*} \widehat{E}(\mathcal {Z}):=\frac{1}{N}\sum _{i=1}^NZ^i. \end{align*}\) By the law of large numbers, \(\widehat{E}(\mathcal {Z})\) converges to \(E(Z)\) with probability one as \(N\) goes to infinity. For finite \(N\) and when again \(Z\) has bounded support, i.e., \(P(Z\in [a,b])=1\), we can apply Hoeffding’s inequality and calculate an upper bound \(\overline{E}(\mathcal {Z},\delta)\) of the mean \(E(Z)\) as \(\begin{align*} \overline{E}(\mathcal {Z},\delta)&:=\widehat{E}(\mathcal {Z})+\sqrt {\frac{\ln (2/\delta)}{2N}}(b-a) \end{align*}\) and a lower bound as \(\begin{align*} \underline{E}(\mathcal {Z},\delta)&:=\widehat{E}(\mathcal {Z})-\sqrt {\frac{\ln (2/\delta)}{2N}}(b-a). \end{align*}\) Similarly to the observation that we made for CVaR, note that the upper and lower bounds \(\overline{E}(\mathcal {Z},\delta)\) and \(\underline{E}(\mathcal {Z},\delta)\), respectively, become less accurate with increasing values of \((b-a)\) and more accurate with increasing \(N\). We next show that we indeed obtain valid upper and lower bounds.

Proposition 3.

Let \(\delta \in (0,1)\) be a probability threshold. Assume that \(P(\rho ^\phi (X,t)\in [a,b])=1\). Let \(\overline{E}(\mathcal {Z},\delta)\) and \(\underline{E}(\mathcal {Z},\delta)\) be based on the data \(\mathcal {Z}\). With a probability of at least \(1-\delta\), it holds that \(\begin{align*} \underline{E}(\mathcal {Z},\delta)\le E(Z)\le \overline{E}(\mathcal {Z},\delta). \end{align*}\)
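The Hoeffding-based mean bounds are straightforward to implement; a minimal sketch:

```python
import numpy as np

def mean_bounds(zs, delta, a, b):
    """Hoeffding confidence interval for E(Z), assuming P(Z in [a, b]) = 1."""
    zs = np.asarray(zs, float)
    half_width = np.sqrt(np.log(2.0 / delta) / (2.0 * len(zs))) * (b - a)
    m = float(np.mean(zs))
    return m - half_width, m + half_width

rng = np.random.default_rng(4)
zs = rng.uniform(0.0, 1.0, size=10_000)
lb, ub = mean_bounds(zs, delta=0.01, a=0.0, b=1.0)  # brackets E(Z) = 0.5 w.h.p.
```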

Example 1 (continued).

We now modify Example 1 by considering that the regions \(C\) and \(D\) are not exactly known. Let \(c\) and \(d\) in Equations (3) and (4), respectively, be Gaussian random vectors as (6) \(\begin{align} c\sim \mathcal {N}\left(\begin{bmatrix}2 \\ 3 \end{bmatrix},\begin{bmatrix}0.2 & 0\\ 0 & 0.2 \end{bmatrix}\right)\!, \end{align}\) (7) \(\begin{align} d\sim \mathcal {N}\left(\begin{bmatrix}6 \\ 4 \end{bmatrix},\begin{bmatrix}0.2 & 0\\ 0 & 0.2 \end{bmatrix}\right)\!. \end{align}\) Consequently, the signals \(x_1\)-\(x_6\) become stochastic processes denoted by \(X_1\)-\(X_6\). Let now \(X_j^i\) denote the \(i\)th observed realization of \(X_j\) where \(j\in \lbrace 1,... ,6\rbrace\). Our first goal is to estimate \(VaR_\beta (Z)\) to compare the risk between the six robot trajectories \(r_1\)-\(r_6\). We set \(\delta :=0.01\) and \(N:=15,000\).10 The histograms of \(-\rho ^\phi (X_j)\) for each trajectory are shown in Figure 5. For different risk levels \(\beta\), the resulting upper and lower bounds for the value-at-risk are shown in the next table.

Across all \(\beta\), it can be observed that the estimate \(\overline{VaR}_\beta\) of \(VaR_\beta\) is relatively tight, as the difference \(|\overline{VaR}_\beta -\underline{VaR}_\beta |\) between upper and lower bounds is small. The table indicates that trajectories \(r_1\) and \(r_2\) are not favorable and are not robust. Recall that smaller risk values are favorable, as only negative values indicate actual robustness. Trajectory \(r_3\) is better compared to trajectories \(r_1\) and \(r_2\), but worse than \(r_4\)-\(r_6\) in terms of the approximate STL robustness risk of \(\phi\). For trajectories \(r_4\)-\(r_6\), note that a risk level of \(\beta =0.9\) indicates that the trajectories have roughly the same approximate STL robustness risk. However, once the risk level \(\beta\) is increased to 0.925, 0.95, and 0.975, it becomes clear that \(r_6\) is preferable over \(r_4\) and \(r_5\). This matches what one would expect from closer inspection of Figures 2 and 5.

We next estimate \(CVaR_\beta (Z)\) and therefore restrict \(\rho ^\phi\) to lie within \([-0.5,0.25]\) simply by clipping values that exceed this bound. This choice is motivated by our previous discussion in Section 4.2 and as \(\rho ^\phi\) is upper bounded by 0.25; see histograms in Figure 5. For different risk levels \(\beta\), the resulting upper and lower bounds for the conditional value-at-risk are shown next.

In general, the same observations regarding the ranking of \(r_1\)-\(r_6\) can be made based on the conditional value-at-risk. However, the risk values are in general much higher, as \({CVaR}_\beta\) is more risk-sensitive than \({VaR}_\beta\). An important observation is that the estimates \(\overline{CVaR}_\beta\) of \(CVaR_\beta\) are not as tight as before for \({VaR}_\beta\), as the difference \(|\overline{CVaR}_\beta -\underline{CVaR}_\beta |\) is larger, particularly for larger \(\beta\) due to the division by \(1-\beta\) in the estimates \(\overline{CVaR}_\beta\) and \(\underline{CVaR}_\beta\). For completeness, we also report the estimated mean of \(Z\).


5 EXACT COMPUTATION OF THE APPROXIMATE STL ROBUSTNESS RISK

In the previous section, we estimated the approximate STL robustness risk using observed realizations \(X^1,... ,X^N\) of the stochastic process \(X\). In this section, we instead assume that the distribution of \(X\) is known. There are two main challenges in computing the approximate STL robustness risk \(R(-\rho ^\phi (X,t))\) from the distribution of \(X\). First, note that exact computation of \(R(-\rho ^\phi (X,t))\) requires knowledge of the CDF of \(\rho ^\phi (X,t)\). However, the CDF of \(\rho ^\phi (X,t)\) is in general not known and often hard to obtain analytically. Second, calculating \(R(-\rho ^\phi (X,t))\) may often involve solving high-dimensional integrals for which in most cases no closed-form expression exists. For these reasons, we assume in this section that the STL formula \(\phi\) is bounded and that \(X:T\times \Omega \rightarrow \mathcal {X}\) is a discrete-time stochastic process, i.e., \(T:=\mathbb {Z}\), with a finite state space \(\mathcal {X}\subseteq \mathbb {R}^n\) (i.e., the set \(\mathcal {X}\) consists of a finite set of elements).

Recall that the time intervals \(I\) contained in a bounded STL formula \(\phi\) are compact. The satisfaction of such an STL formula can hence be decided by finite signals. A bounded STL formula \(\phi\) has a future formula length \(L^\phi _f\in \mathbb {Z}\) and a past formula length \(L^\phi _p\in \mathbb {Z}\). The future formula length \(L^\phi _f\) can be calculated, similarly to Sadraddini and Belta [68], as

The past formula length \(L^\phi _p\) can be calculated similarly as

A finite signal of length \(L^\phi _f+L^\phi _p\) is now sufficient to determine if \(\phi\) is satisfied at time \(t\). In particular, information from the time interval \(T_L:=\lbrace t-L^\phi _p,... ,t,... ,t+L^\phi _f\rbrace\) is sufficient to determine if \(\phi\) is satisfied at time \(t\). Now, let \(X:\Omega \times T_L\rightarrow \mathcal {X}\) be the discrete-time stochastic process under consideration where the state space \(\mathcal {X}\subseteq \mathbb {R}^n\) is a finite set. Note that we can always obtain such a finite set \(\mathcal {X}\) from a continuous state space by discretization. Let the probability mass function (PMF) \(f_X(x)\) of \(X\) be given. The next result is stated without proof, as it follows immediately from the fact that \(T_L\) and \(\mathcal {X}\), and consequently the set of signals \(\mathfrak {F}(T_L,\mathcal {X})\) are finite sets.

Proposition 4.

Let \(\phi\) be a bounded STL formula with future and past formula lengths \(L^\phi _f\) and \(L^\phi _p\), respectively. Let \(X:\Omega \times T_L\rightarrow \mathcal {X}\) be a discrete-time stochastic process with a finite state space \(\mathcal {X}\). For \(t\in \mathbb {R}\), we can calculate the PMF \(f_Z(z)\) and the CDF \(F_Z(z)\) of \(Z\) as \(\begin{align*} f_Z(z)=\sum _{\substack{x\in \mathfrak {F}(T_L,\mathcal {X}):\\ -\rho ^\phi (x,t)=z}}f_X(x) \quad \text{and}\quad F_Z(z)=\sum _{\substack{x\in \mathfrak {F}(T_L,\mathcal {X}):\\ -\rho ^\phi (x,t)\le z}}f_X(x). \end{align*}\)

Note that \(F_Z(z)=\sum _{z^{\prime }\le z}f_Z(z^{\prime })\) holds as required. Having obtained the PMF \(f_Z(z)\) and the CDF \(F_Z(z)\) of \(Z\), it is now straightforward to calculate \(R(Z)\) for various risk measures \(R\). Note, in particular, that \(Z\) is a discrete random variable, so \(f_Z(z)\) has finite support and \(F_Z(z)\) is a piecewise constant step function, hence simplifying the calculation of \(R(Z)\), as no high-dimensional integrals need to be solved.
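For a small finite state space, this computation amounts to enumerating all signals in \(\mathfrak{F}(T_L,\mathcal{X})\). The toy sketch below (Python; the grid, PMF, i.i.d. states, and formula \(G_{[0,2]}(x\le 1)\) are assumptions chosen for illustration) computes \(f_Z\), \(F_Z\), and the exact \(VaR_\beta(Z)\):

```python
import itertools
from collections import defaultdict
import numpy as np

# Assumed toy setup: i.i.d. states on a finite grid and the bounded formula
# phi = G_[0,2](x <= 1), whose robust semantics is rho(x) = min_t (1 - x_t)
states = [0.0, 0.5, 1.0, 1.5]
p_state = {0.0: 0.4, 0.5: 0.3, 1.0: 0.2, 1.5: 0.1}
L = 3  # number of time steps in T_L

f_Z = defaultdict(float)
for traj in itertools.product(states, repeat=L):
    rho = min(1.0 - x for x in traj)           # robust semantics of phi
    prob = float(np.prod([p_state[x] for x in traj]))
    f_Z[-rho] += prob                          # PMF of the cost Z = -rho

# CDF of Z and the exact VaR_beta(Z) = inf{z : F_Z(z) >= beta}
zs = sorted(f_Z)
cdf = np.cumsum([f_Z[z] for z in zs])
beta = 0.9
var_exact = zs[int(np.searchsorted(cdf, beta))]
```

The enumeration is exponential in the horizon, so this is only practical for short formula lengths and coarse discretizations, which is consistent with the finite-state-space assumption of this section.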

Example 1 (continued).

Recall that \(c\) and \(d\) were assumed to be Gaussian distributed according to Equations (6) and (7), respectively. We first discretize the distributions of \(c\) and \(d\); see Appendix G for details. From the PMFs \(f_c\) and \(f_d\), we can now calculate the PMF \(f_X(x)\) for any \(x \in \mathfrak {F}(T_L,\mathbb {R}^6)\times \mathcal {C}\times \mathcal {D}\) where \(\mathcal {C}\) and \(\mathcal {D}\) are the discretized domains of \(c\) and \(d\). We can hence calculate \(f_Z(z)\) according to Proposition 4. From this, the value at risk \(VaR_\beta (Z)\) can be calculated, which is reported in the next table.

It can be seen that the STL robustness risks reported above closely resemble the sampling-based estimates \(\overline{VaR}_\beta\) of \({VaR}_\beta\) from Section 4.


6 SIMULATIONS: AUTONOMOUS DRIVING IN CARLA

We consider the verification of neural network-based lane-keeping controllers for lateral control in the autonomous driving simulator CARLA [19]; see Figure 1 (left). Lane-keeping in CARLA is achieved by tracking a set of predefined waypoints. For longitudinal control, a built-in PID controller is used to stabilize the car at 20 km/h. We particularly trained four different neural network controllers as detailed below. Our overall goal is to estimate and compare the risks of these four controllers for five different specifications during a double left turn; see Figure 1 (middle).


Fig. 1. Left: Simulation environment in the autonomous driving simulator CARLA. Middle: Double left turn on which we evaluate four trained neural network lane-keeping controllers. Right: Cross-track error \(c_e\) and orientation error \(\theta _e\) used for risk verification of the neural network controllers.


Fig. 2. The figure shows six potential robot trajectories \(r_1\) - \(r_6\) and the four regions \(A\) , \(B\) , \(C\) , and \(D\) . The specification given in Equation (2) is violated by \(r_1\) and satisfied by \(r_2\) - \(r_6\) . It can be seen that \(r_2\) only marginally satisfies \(\phi\) , while \(r_3\) - \(r_6\) satisfy \(\phi\) robustly.


Fig. 3. Illustration of the expected value, the value-at-risk, and the conditional value-at-risk.


Fig. 4. Left: 200 realizations of the voltage \(V(t)\) over the capacitor of an RC circuit. Right: Histogram of the negative robustness degree \(-\text{RD}^{\phi }(V,0)\) of the specification \(\phi :=G_{[2,\infty)}(V\le 1)\) .


Fig. 5. Histogram of \(-\text{RD}^{\phi }(X_j,0)\) of the specification \(\phi\) in (2) for robot trajectories \(j\in \lbrace 1,... ,6\rbrace\) .

For the verification and comparison of these controllers, we are particularly interested in the cross-track error, which is a measure of the closest distance from the car to the path defined by the set of waypoints, as illustrated in Figure 1 (right). Formally, let \(wp_1\) be the waypoint that is closest to the car and let \(wp_2\) be the waypoint succeeding \(wp_1\). Then the cross-track error is defined as \(c_e:=\Vert w\Vert \sin (\theta _w),\) where \(w\) is the vector pointing from \(wp_1\) to the car and \(\theta _w\) is the angle between \(w\) and the vector pointing from \(wp_1\) to \(wp_2\). We are also interested in the orientation error \(\theta _e:=\theta _t-\theta\) between the orientation of the reference path \(\theta _t\) and the orientation of the car \(\theta\).
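The two error signals can be computed from the waypoints and the car pose along these lines (a sketch; the sign convention for \(c_e\) and the angle wrapping for \(\theta_e\) are assumed conventions, as the article does not fix them):

```python
import numpy as np

def cross_track_error(car_xy, wp1, wp2):
    """Cross-track error c_e = ||w|| * sin(theta_w), where w points from the
    closest waypoint wp1 to the car and theta_w is the angle between w and
    the path direction wp2 - wp1. Here c_e is signed (positive when the car
    is to the left of the path), which is an assumed convention."""
    w = np.asarray(car_xy, float) - np.asarray(wp1, float)
    path = np.asarray(wp2, float) - np.asarray(wp1, float)
    # ||w|| sin(theta_w) equals the 2D cross product of the unit path
    # direction with w
    return float((path[0] * w[1] - path[1] * w[0]) / np.linalg.norm(path))

def orientation_error(theta_t, theta):
    """Orientation error theta_e = theta_t - theta, wrapped to (-pi, pi]
    (a common convention for angular errors)."""
    e = theta_t - theta
    return float(np.arctan2(np.sin(e), np.cos(e)))
```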

The state \(x:=(c_e,\theta _e,v,d,\dot{\theta }_t)\) of the car consists of the cross-track error \(c_e\), the orientation error \(\theta _e\), the velocity \(v\) of the car, the internal state \(d\) of the longitudinal PID controller, and the rate \(\dot{\theta }_t\) at which the orientation of the reference path changes. The control input for which we aim to learn and verify a lane-keeping controller is the steering angle \(u\).

6.1 Training Neural Network Lane-keeping Controllers

We have trained four different neural network controllers. Two of these four controllers were obtained by using supervised imitation learning (IL) [65], while the other two controllers were obtained by learning control barrier functions (CBFs) from expert demonstrations [49].

To obtain the two imitation learning controllers, we used a CARLA built-in PID controller \(u^*\) as an expert controller to collect expert trajectories, which are sequences of state and control input pairs. The first IL controller, denoted as IL\(_\text{full}\), is trained using the full state \(x\) as the input to the neural network, while the control input \(u\) is the output. The second IL controller, denoted as IL\(_\text{partial}\), is trained using only partial state knowledge. In particular, only the cross-track error \(c_e\), the orientation error \(\theta _e\), and the rate \(\dot{\theta }_t\) at which the orientation of the path changes are used as inputs to the neural network. Both are one-layer neural networks with 20 neurons and ReLU activation functions, trained with the mean squared error as the loss function.

Remark 6.

For simplicity, we did not attempt to address the distribution shift between the expert controller and the trained controller, e.g., by using DAGGER [66]. We remark that our primary goal lies in the verification and comparison of risk between controllers.

To obtain the CBF-based controllers, we again used the expert controller \(u^*\) to get expert trajectories from which we learned robust control barrier functions following Lindemann et al. [49]. The first controller, denoted as CBF\(_\text{full}\), uses again full state knowledge of \(x\). The second controller, denoted as CBF\(_\text{partial}\), estimates the cross-track error \(c_e\) from RGB dashboard camera images while assuming knowledge of the remaining states; see Lindemann et al. [49] for details. Both neural network controllers consist of two layers with 32 and 16 neurons and tanh activation functions.

6.2 Risk Verification and Comparison

For the risk verification and comparison of these four controllers, we tested each of them on the training course; see Figure 1 (middle). We uniformly sampled the initial position of the car in a range of \(c_e\in [-1,1]\) m and \(\theta _e\in [-0.4,0.4]\) rad and added normally distributed noise in a range of \([-0.1,0.1]\) rad to the control input to simulate actuation noise so the car becomes a stochastic process \(X\). We collected \(N:=1,\!000\) trajectories for each controller, of which 600 are shown in Figure 6. From a visual inspection, we can already see that the controllers that use full state knowledge (IL\(_\text{full}\), CBF\(_\text{full}\)) outperform the controllers that only use partial state knowledge (IL\(_\text{partial}\), CBF\(_\text{partial}\)). Videos of each controller from five different initial conditions are provided under https://tinyurl.com/48xjf545.


Fig. 6. Shown are 600 trajectories for each of the four controllers during the double left turn. Trajectories marked in red led to a collision with an obstacle.

To obtain a more formal assessment, we next estimate the risk of each controller with respect to: (1) the cross-track error over the whole trajectory, during steady state, and during the transient phase, (2) the responsiveness of the controller, and (3) the orientation error.

6.2.1 Cross-track Error.

The specification that we look at here is that the cross-track error \(c_e\) should always be within the interval \([-2.25,2.25]\), where 2.25 is a threshold that we selected based on the cross-track error induced by the expert controller \(u^*\). In STL language, we have \(\begin{align*} \phi _1:=G_{[0,\infty)} (|c_e|\le 2.25). \end{align*}\)

We show the histograms of \(\rho ^{\phi _1}(X,0)\) for each controller in Figure 7(a) (left).11 We are particularly interested in the controllers IL\(_\text{full}\) and CBF\(_\text{full}\) and show their histograms isolated in Figure 7(a) (right) for better readability. Selecting \(\delta :=0.01\), the estimates of \({VaR}_{0.85}\), \({VaR}_{0.95}\), \({CVaR}_{0.85}\), and \(E\) are reported in the table below. In the last column, we have additionally reported the empirical probability that the specification \(\phi _1\) is satisfied, which we calculate as \(\begin{align*} \#_{\phi _1}:=\frac{1}{N}\sum _{i=1}^N\mathbb {I}\big (\rho ^{\phi _1}(X^i,0)\gt 0\big). \end{align*}\)

For each risk measure, we highlight the controller with the lowest risk in green.


Fig. 7. Histograms of \(-\rho ^{\phi _i}(X,0)\) for each controller for the specifications \(\phi _1\)-\(\phi _5\).

Based on these risk estimates, we make the following observations:

As expected from the visual inspection of Figure 6, the controllers IL\(_\text{partial}\) and CBF\(_\text{partial}\) perform poorly. Among these two, CBF\(_\text{partial}\) performs slightly better in terms of risk than IL\(_\text{partial}\).

The controllers IL\(_\text{full}\) and CBF\(_\text{full}\) perform better. The risk of CBF\(_\text{full}\) in terms of the expected value \(\overline{E}\) is smaller than the risk of IL\(_\text{full}\). Interestingly, the risk of IL\(_\text{full}\) in terms of the \(\overline{VaR}_{0.85}\), \(\overline{VaR}_{0.95}\), and \(\overline{CVaR}_{0.85}\) is smaller than the risk of CBF\(_\text{full}\). This is due to the long tail induced by CBF\(_\text{full}\); see Figure 7(a) (right). We hence argue that IL\(_\text{full}\) is the better choice with respect to \(\phi _1\).

The estimate \(\overline{CVaR}_{0.85}\) of \({CVaR}_{0.85}\) is not tight and very conservative: the difference \(|\overline{CVaR}_{0.85}-\underline{CVaR}_{0.85}|\) between the upper and lower bounds is large. To tighten this bound, a larger sample size \(N\) is needed. We therefore omit the conditional value-at-risk in the remainder.

In this case, a low empirical satisfaction probability \(\#_{\phi _1}\) correlates with a high risk. We remark that this is not always the case: risk captures characteristics of the right tail of the distribution of \(-\rho ^{\phi _1}(X,0)\), while satisfaction probabilities focus on the left tail of this distribution. This will become apparent in the results for specification \(\phi _5\).

We formulate the hypothesis that the long tail of CBF\(_\text{full}\), which makes CBF\(_\text{full}\) riskier than IL\(_\text{full}\), is induced by the transient behavior. We analyze this hypothesis in the remainder by looking at the specifications \(\phi _2\) (steady state) and \(\phi _3\) (transient phase).

6.2.2 Steady-state.

In the previous section, we concluded that IL\(_\text{full}\) is the best controller for the specification \(\phi _1\), i.e., when considering the cross-track error \(c_e\) over the whole trajectory. We now study the steady-state behavior of each controller in terms of \(c_e\) and reveal that CBF\(_\text{full}\) is the least risky controller when only looking at the steady state. To this end, we check whether the cross-track error \(c_e\) always remains within the interval \([-2.25,2.25]\) after 10 s via the specification \(\begin{align*} \phi _2:=G_{[10,\infty)} (|c_e|\le 2.25). \end{align*}\)

We show the histograms of \(\rho ^{\phi _2}(X,0)\) for each controller in Figure 7(b) and report the risk estimates below.

Based on these risk estimates, we make the following observations:

We see that our hypothesis holds: CBF\(_\text{full}\) now has the least risky behavior for all risk measures with respect to \(\phi _2\), i.e., during steady state.

For CBF\(_\text{full}\), we have \(\overline{VaR}_{0.95}(-\rho ^{\phi _2}(X,0))=-0.794\). Consequently, for at most 5% of the realizations the robustness is less than 0.794.

6.2.3 Transient Phase.

Complementary to the previous analysis, we now look at the transient behavior of the cross-track error \(c_e\) of each controller by imposing the specification \(\begin{align*} \phi _3:=F_{[0,5]}G_{[0,5]}(|c_e|\le 1.25). \end{align*}\) In other words, the specification \(\phi _3\) requires that eventually within the first 5 s the absolute value of the cross-track error falls below the threshold 1.25 for at least 5 s. We show the histogram of each controller in Figure 7(c) and report the corresponding risk estimates next.
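In discrete time, the nested operators translate into a max over candidate start times of a min over the subsequent window. A sketch, with time measured in steps rather than seconds and an illustrative trajectory:

```python
def rob_FG(c_e, a, b, thr=1.25):
    """Robustness of F_[0,a] G_[0,b](|c_e| <= thr) at step 0 for a
    discrete-time trajectory with at least a + b + 1 samples; a and b
    are given in steps.  F becomes a max over start times, G a min
    over the subsequent window."""
    def rob_G(t):  # worst-case margin over the window [t, t + b]
        return min(thr - abs(c_e[k]) for k in range(t, t + b + 1))
    return max(rob_G(t) for t in range(0, a + 1))

# The transient decays: the best window starts once |c_e| has settled.
rob_FG([2.0, 1.5, 0.5, 0.3, 0.2, 0.1, 0.1], a=2, b=2)  # -> 0.75
```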

For \(\phi _3\), we see a result similar to \(\phi _1\) in the sense that IL\(_\text{full}\) is the least risky controller, but now IL\(_\text{full}\) is clearly the least risky across all risk measures. It is also worth pointing out that CBF\(_\text{full}\) and CBF\(_\text{partial}\) have almost the same expected value, while \(\overline{VaR}_{0.85}\), \(\overline{VaR}_{0.9}\), and \(\overline{VaR}_{0.95}\) indicate that CBF\(_\text{full}\) is less risky.

Summarizing the observations from \(\phi _1\), \(\phi _2\), and \(\phi _3\), IL\(_\text{full}\) is the least risky controller during the transient phase and CBF\(_\text{full}\) is the least risky controller during steady-state.

6.2.4 Responsiveness.

So far, we have focused on the cross-track error during the steady-state and transient phases. We now analyze how responsive the controllers are when the cross-track error gets too large, i.e., how quickly they can decrease the error again to an acceptable level. Let us therefore look at the specification \(\begin{align*} \phi _4:=G_{[10,\infty)} \big ((|c_e|\ge 1.25) \Rightarrow F_{[0,5]}G_{[0,5]}(|c_e|\le 1.25)\big). \end{align*}\) In other words, whenever the cross-track error \(c_e\) leaves the interval \([-1.25,1.25]\) after the transient phase has died out (approximately after 10 s), it should hold that within the next 5 s the cross-track error is again within the interval \([-1.25,1.25]\) for at least 5 s. We show the histogram of each controller in Figure 7(d) and report the corresponding risk estimates below.
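A sketch of how such an implication is evaluated under the robust semantics, using the identity \(\rho ^{p\Rightarrow q}=\max (-\rho ^{p},\rho ^{q})\); again, indices stand in for seconds and the trajectory is illustrative:

```python
def rob_phi4(c_e, t0, a, b, thr=1.25):
    """Robustness of G_[t0,inf)((|c_e| >= thr) => F_[0,a] G_[0,b](|c_e| <= thr))
    at step 0 on a finite trajectory, with rho(p => q) = max(-rho(p), rho(q));
    windows are truncated at the end of the trajectory."""
    last = len(c_e) - 1

    def rob_FG(t):  # max over start times of the min margin over a window
        return max(min(thr - abs(c_e[k]) for k in range(s, min(s + b, last) + 1))
                   for s in range(t, min(t + a, last) + 1))

    return min(max(thr - abs(c_e[t]),  # -rho(|c_e| >= thr)
                   rob_FG(t))
               for t in range(t0, last + 1))
```

In the trajectory `[0.0, 0.0, 2.0, 0.5, 0.5, 0.5, 0.5]`, the single excursion at step 2 is recovered within the allowed window, so the overall robustness is positive.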

The results are interesting in the sense that the risks of IL\(_\text{full}\) and CBF\(_\text{full}\) in terms of the expected value are almost identical, even slightly favoring IL\(_\text{full}\), while the risk of CBF\(_\text{full}\) in terms of \(\overline{VaR}_{0.85}\), \(\overline{VaR}_{0.9}\), and \(\overline{VaR}_{0.95}\) is much smaller.

6.2.5 Orientation Error.

Let us now focus on the orientation error \(\theta _e\). In general, an orientation error is expected when either the orientation \(\theta _t\) of the reference path changes or the car tries to reduce the cross-track error \(c_e\) by adjusting \(\theta\), e.g., when \(|c_e|\gt 0\), we need \(|\theta _e|\gt 0\) to reduce \(|c_e|\) (see Figure 1). To analyze how well the orientation error is adjusted when the cross-track error leaves the interval \([-1.25,1.25]\), we consider the specification \(\begin{align*} \phi _5:=G_{[0,\infty)} \big (\big ((c_e\ge 1.25) \Rightarrow F_{[0,2]}G_{[0,1]}(\theta _e\le 0)\big)\wedge \big ((c_e\le -1.25) \Rightarrow F_{[0,2]}G_{[0,1]}(\theta _e\ge 0)\big)\big). \end{align*}\) The specification \(\phi _5\) encodes that, whenever the cross-track error \(c_e\) leaves the interval \([-1.25,1.25]\), the orientation error \(\theta _e\) should, within 2 s, be such that the cross-track error decreases for at least 1 s. We show the histogram of each controller in Figure 7(e) and report the risk estimates below.

We can observe that the risk of IL\(_\text{full}\) is the lowest for \(\overline{VaR}_{0.85}\) and \(\overline{VaR}_{0.9}\), while the risks of IL\(_\text{full}\) and CBF\(_\text{full}\) are roughly equal for the expected value \(\overline{E}\). However, the distribution induced by IL\(_\text{full}\) has a long tail, which is why the risk of CBF\(_\text{full}\) is the lowest for \(\overline{VaR}_{0.95}\).


7 CONCLUSION

We defined the STL robustness risk to quantify the risk of a stochastic system lacking robustness against failure of an STL specification. The approximate STL robustness risk was defined as a computationally tractable upper bound of the STL robustness risk. It was shown how the approximate STL robustness risk is estimated from data for the value-at-risk and the conditional value-at-risk. We also provided conditions under which the approximate STL robustness risk can be computed exactly. Within the autonomous driving simulator CARLA, we trained four different neural network lane-keeping controllers and estimated their risk for five different STL system specifications.

APPENDICES

A SEMANTICS OF SIGNAL TEMPORAL LOGIC

The satisfaction function \(\beta ^\phi (x,t)\) determines whether or not the signal \(x\) satisfies the specification \(\phi\) at time \(t\). The definition of \(\beta ^\phi (x,t)\) follows recursively from the structure of \(\phi\) as follows:

Definition 5

(STL Semantics).

For a signal \(x:T\rightarrow \mathbb {R}^n\) and an STL formula \(\phi\), the satisfaction function \(\beta ^\phi (x,t)\) is recursively defined as \(\begin{align*} \beta ^\top (x,t)&:=\top , \\ \beta ^\mu (x,t)&:={\left\lbrace \begin{array}{ll} \top &\text{ if } x(t)\in O^\mu \\ \bot &\text{ otherwise, } \end{array}\right.}\\ \beta ^{\lnot \phi }(x,t)&:= \lnot \beta ^{\phi }(x,t),\\ \beta ^{\phi ^{\prime } \wedge \phi ^{\prime \prime }}(x,t)&:=\min (\beta ^{\phi ^{\prime }}(x,t),\beta ^{\phi ^{\prime \prime }}(x,t)),\\ \beta ^{\phi ^{\prime } U_I \phi ^{\prime \prime }}(x,t)&:=\sup _{t^{\prime \prime }\in (t\oplus I)\cap T}\Big (\min \big (\beta ^{\phi ^{\prime \prime }}(x,t^{\prime \prime }),\inf _{t^{\prime }\in (t,t^{\prime \prime })\cap T}\beta ^{\phi ^{\prime }}(x,t^{\prime })\big)\Big),\\ \beta ^{\phi ^{\prime } \underline{U}_I \phi ^{\prime \prime }}(x,t)&:=\sup _{t^{\prime \prime }\in (t\ominus I)\cap T}\Big (\min \big (\beta ^{\phi ^{\prime \prime }}(x,t^{\prime \prime }),\inf _{t^{\prime }\in (t^{\prime \prime },t)\cap T}\beta ^{\phi ^{\prime }}(x,t^{\prime })\big)\Big). \end{align*}\)

The semantics in Definition 5 use the strict non-matching versions \(U_I\) and \(\underline{U}_I\) of the until operators. The non-strict matching versions of the until operator, in comparison, replace the open time intervals \((t,t^{\prime \prime })\) in Definition 5 by the closed time intervals \([t,t^{\prime \prime }]\) as follows: \(\begin{align*} \beta ^{\phi ^{\prime } \vec{U}_I \phi ^{\prime \prime }}(x,t)&:=\sup _{t^{\prime \prime }\in (t\oplus I)\cap T}\Big (\min \big (\beta ^{\phi ^{\prime \prime }}(x,t^{\prime \prime }),\inf _{t^{\prime }\in [t,t^{\prime \prime }]\cap T}\beta ^{\phi ^{\prime }}(x,t^{\prime })\big)\Big),\\ \beta ^{\phi ^{\prime } \vec{\underline{U}}_I \phi ^{\prime \prime }}(x,t)&:=\sup _{t^{\prime \prime }\in (t\ominus I)\cap T}\Big (\min \big (\beta ^{\phi ^{\prime \prime }}(x,t^{\prime \prime }),\inf _{t^{\prime }\in [t^{\prime \prime },t]\cap T}\beta ^{\phi ^{\prime }}(x,t^{\prime })\big)\Big). \end{align*}\)
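For discrete time (\(T=\mathbb {N}\)) and a finite signal, these semantics can be evaluated directly: with the ordering \(\bot \lt \top\), the sup and inf over a finite index set become `any` and `all`. A small sketch of the strict, non-matching future until (function and variable names are ours):

```python
def until(sat1, sat2, t, interval):
    """Boolean semantics of the strict, non-matching until phi' U_I phi''
    at step t (cf. Definition 5 with T = N).  sat1 and sat2 list the
    satisfaction of phi' and phi'' at each step; interval = (a, b)."""
    a, b = interval
    t_max = min(t + b, len(sat2) - 1)
    for t2 in range(t + a, t_max + 1):
        # inf over the open interval (t, t2) becomes `all` over t+1..t2-1
        if sat2[t2] and all(sat1[t1] for t1 in range(t + 1, t2)):
            return True
    return False

# F_[0,3] phi'' is True U_[0,3] phi'':
until([True] * 4, [False, False, True, False], 0, (0, 3))
```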

B PROOF OF THEOREM 1

We prove the statement of Theorem 1 first for the semantics \(\beta ^\phi (X,t)\), then for the robust semantics \(\rho ^\phi (X,t)\), and finally for the robustness degree \(\text{RD}^{\phi }(X,t)\).


B.1 Semantics βϕ (X,t)

Let us define the power set of \(\mathbb {B}\) as \(2^\mathbb {B}:=\lbrace \emptyset ,\lbrace \top \rbrace ,\lbrace \bot \rbrace ,\lbrace \bot ,\top \rbrace \rbrace\). Note that \(2^\mathbb {B}\) is a \(\sigma\)-algebra of \(\mathbb {B}\). To prove measurability of \(\beta ^\phi (X(\cdot ,\omega),t)\) in \(\omega\) for a fixed \(t\in T\), we need to show that, for each \(B\in 2^\mathbb {B}\), the inverse image of \(B\) under \(\beta ^\phi (X(\cdot ,\omega),t)\) for a fixed \(t\in T\) is contained within \(\mathcal {F}\), i.e., that it holds that \(\begin{align*} \lbrace \omega \in \Omega | \beta ^\phi (X(\cdot ,\omega),t)\in B\rbrace \subseteq \mathcal {F}. \end{align*}\) We show measurability of \(\beta ^\phi (X(\cdot ,\omega),t)\) in \(\omega\) for a fixed \(t\in T\) inductively on the structure of \(\phi\).

\(\top\): For \(B\in 2^\mathbb {B}\), it trivially holds that \(\lbrace \omega \in \Omega | \beta ^\top (X(\cdot ,\omega),t)\in B\rbrace \subseteq \mathcal {F}\), since \(\beta ^\top (X(\cdot ,\omega),t)=\top\) for all \(\omega \in \Omega\). Indeed, by Definition 5, \(\lbrace \omega \in \Omega | \beta ^\top (X(\cdot ,\omega),t)\in B\rbrace =\emptyset \subseteq \mathcal {F}\) if \(B\in \lbrace \emptyset ,\lbrace \bot \rbrace \rbrace\) and \(\lbrace \omega \in \Omega | \beta ^\top (X(\cdot ,\omega),t)\in B\rbrace =\Omega \subseteq \mathcal {F}\) otherwise.

\(\mu\): Let \(1_{O^\mu }:\mathbb {R}^n\rightarrow \mathbb {B}\) be the indicator function of \(O^\mu\) with \(1_{O^\mu }(\zeta):=\top\) if \(\zeta \in O^\mu\) and \(1_{O^\mu }(\zeta):=\bot\) otherwise. According to Definition 5, we can now write \(\beta ^{\mu }(X(\cdot ,\omega),t)=1_{O^\mu }(X(t,\omega))\). Recall that \(O^\mu\) is measurable and note that the indicator function of a measurable set is measurable again (see, e.g., Durrett [20, Chapter 1.2]). Since \(X(t,\omega)\) is measurable in \(\omega\) for a fixed \(t\in T\) by definition, it follows that \(1_{O^\mu }(X(t,\omega))\) and hence \(\beta ^{\mu }(X(\cdot ,\omega),t)\) is measurable in \(\omega\) for a fixed \(t\in T\). In other words, for \(B\in 2^\mathbb {B}\), it follows that \(\begin{align*} \lbrace \omega \in \Omega | \beta ^{\mu }(X(\cdot ,\omega),t)\in B\rbrace =\lbrace \omega \in \Omega |1_{O^\mu }(X(t,\omega))\in B\rbrace \subseteq \mathcal {F}. \end{align*}\)

\(\lnot \phi\): By the induction assumption, \(\beta ^{\phi }(X(\cdot ,\omega),t)\) is measurable in \(\omega\) for a fixed \(t\in T\). Recall that \(\mathcal {F}\) is a \(\sigma\)-algebra that is, by definition, closed under its complement so, for \(B\in 2^\mathbb {B}\), it holds that \(\begin{align*} \lbrace \omega \in \Omega | \beta ^{\lnot \phi }(X(\cdot ,\omega),t)\in B\rbrace =\Omega \setminus \lbrace \omega \in \Omega | \beta ^{\phi }(X(\cdot ,\omega),t)\in B\rbrace \subseteq \mathcal {F}. \end{align*}\)

\(\phi ^{\prime }\wedge \phi ^{\prime \prime }\): By the induction assumption, \(\beta ^{\phi ^{\prime }}(X(\cdot ,\omega),t)\) and \(\beta ^{\phi ^{\prime \prime }}(X(\cdot ,\omega),t)\) are measurable in \(\omega\) for a fixed \(t\in T\). Hence, \(\beta ^{\phi ^{\prime }\wedge \phi ^{\prime \prime }}(X(\cdot ,\omega),t)=\min (\beta ^{\phi ^{\prime }}(X(\cdot ,\omega),t),\beta ^{\phi ^{\prime \prime }}(X(\cdot ,\omega),t))\) is measurable in \(\omega\) for a fixed \(t\in T\), since the min operator of measurable functions is again a measurable function.

\(\phi ^{\prime } U_I \phi ^{\prime \prime }\) and \(\phi ^{\prime } \underline{U}_I \phi ^{\prime \prime }\): Recall the definition of the future until operator \(\begin{align*} \beta ^{\phi ^{\prime } U_I \phi ^{\prime \prime }}(X(\cdot ,\omega),t) := \underset{t^{\prime \prime }\in (t\oplus I)\cap T}{\text{sup}} \big (\min (\beta ^{\phi ^{\prime \prime }}(X(\cdot ,\omega),t^{\prime \prime }),\underset{t^{\prime }\in (t,t^{\prime \prime })\cap T}{\text{inf}}\beta ^{\phi ^{\prime }}(X(\cdot ,\omega),t^{\prime }))\big). \end{align*}\) By the induction assumption, \(\beta ^{\phi ^{\prime }}(X(\cdot ,\omega),t)\) and \(\beta ^{\phi ^{\prime \prime }}(X(\cdot ,\omega),t)\) are measurable in \(\omega\) for a fixed \(t\in T\). First note that \((t,t^{\prime \prime })\cap T\) and \((t\oplus I)\cap T\) are countable sets, since \(T=\mathbb {N}\). According to Guide [26, Theorem 4.27], the supremum and infimum of a countable number of measurable functions are again measurable. Consequently, the function \(\beta ^{\phi ^{\prime } U_I \phi ^{\prime \prime }}(X(\cdot ,\omega),t)\) is measurable in \(\omega\) for a fixed \(t\in T\). The same reasoning applies to \(\beta ^{\phi ^{\prime } \underline{U}_I \phi ^{\prime \prime }}(X(\cdot ,\omega),t)\).


B.2 Robust Semantics ρϕ (X,t)

The proof for \(\rho ^\phi (X(\cdot ,\omega),t)\) follows again inductively on the structure of \(\phi\), and the goal is to show that \(\lbrace \omega \in \Omega | \rho ^\phi (X(\cdot ,\omega),t)\in B\rbrace \subseteq \mathcal {F}\) for each Borel set \(B\in \mathcal {B}\). The difference here, compared to the proof for the semantics \(\beta ^\phi (X(\cdot ,\omega),t)\) presented above, lies only in the way predicates \(\mu\) are handled. Note first that we can write \(\rho ^\mu (X(\cdot ,\omega),t)\) as (8) \(\begin{align} \begin{split}\rho ^\mu (X(\cdot ,\omega),t)&=0.5(1_{O^\mu }(X(t,\omega))+1)\bar{d}(X(t,\omega),\text{cl}(O^{\lnot \mu }))\\ &+0.5(1_{O^\mu }(X(t,\omega))-1) \bar{d}(X(t,\omega),\text{cl}(O^\mu)), \end{split} \end{align}\) where we recall that we interpret \(\top :=1\) and \(\bot :=-1\). Since the composition of the indicator function with \(X(t,\omega)\), i.e., \(1_{O^\mu }(X(t,\omega))\), is measurable in \(\omega\) for a fixed \(t\in T\) as argued before, we only need to show that \(\bar{d}(X(t,\omega),\text{cl}(O^\mu))\) and \(\bar{d}(X(t,\omega),\text{cl}(O^{\lnot \mu }))\) are measurable in \(\omega\) for a fixed \(t\in T\). This follows immediately: \(X(t,\omega)\) is measurable in \(\omega\) for a fixed \(t\in T\) by definition, and the function \(\bar{d}\) is continuous in its first argument, and hence measurable (see Guide [26, Corollary 4.26]), since \(d\) is a metric on \(\mathbb {R}^n\) (see, e.g., Munkres [56, Chapter 3]). Consequently, \(\rho ^\mu (X(\cdot ,\omega),t)\) is measurable in \(\omega\) for a fixed \(t\in T\).


B.3 Robustness Degree RDϕ(X,t)

For \(\text{RD}^\phi (X(\cdot ,\omega),t)\), note that, for a fixed \(t\in T\), the function \(\text{RD}^\phi\) maps from the domain \(\mathfrak {F}(T,\mathbb {R}^n)\) into the domain \(\mathbb {R}\), while \(X(\cdot ,\omega)\) maps from the domain \(\Omega\) into the domain \(\mathfrak {F}(T,\mathbb {R}^n)\). Recall now that \(\text{RD}^\phi (X(\cdot ,\omega),t)=\bar{\kappa }(X(\cdot ,\omega),\text{cl}(\mathcal {L}^\phi (t))):=\inf _{x^*\in \text{cl}(\mathcal {L}^\phi (t))}\kappa (X(\cdot ,\omega),x^*)\) and that \(\kappa\) is a metric defined on the set \(\mathfrak {F}(T,\mathbb {R}^n),\) as argued in Fainekos and Pappas [21]. Therefore, it follows that the function \(\bar{\kappa }\) is continuous in its first argument (see, e.g., Munkres [56, Chapter 3]), and hence measurable with respect to the Borel \(\sigma\)-algebra of \(\mathfrak {F}(T,\mathbb {R}^n)\) (see, e.g., Guide [26, Corollary 4.26]). Consequently, the function \(\text{RD}^\phi :\mathfrak {F}(T,\mathbb {R}^n)\times T\rightarrow \mathbb {R}\) is measurable in its first argument for a fixed \(t\in T\). As \(T\) is countable and \(X\) is a discrete-time stochastic process, it follows that \(X(\cdot ,\omega)\) is measurable with respect to the product \(\sigma\)-algebra of Borel \(\sigma\)-algebras \(\mathcal {B}^n\), which is equivalent to the Borel \(\sigma\)-algebra of \(\mathfrak {F}(T,\mathbb {R}^n)\) (see, e.g., Kallenberg [36, Lemma 1.2]). Since function composition preserves measurability, it holds that \(\text{RD}^\phi (X(\cdot ,\omega),t)\) is measurable in \(\omega\) for a fixed \(t\in T\).

C PROOF OF THEOREM 2

We prove the statement of Theorem 2 first for the semantics \(\beta ^\phi (X,t)\), then for the robustness degree \(\text{RD}^{\phi }(X,t)\), and finally for the robust semantics \(\rho ^\phi (X,t)\).


C.1 Semantics βϕ (X,t)

The proof again follows inductively on the structure of \(\phi\). The difference to the proof of Theorem 1 lies in the way the until operators are handled, which are now assumed to be the non-strict matching versions \(\phi ^{\prime } \vec{U}_I \phi ^{\prime \prime }\) and \(\phi ^{\prime } \vec{\underline{U}}_I \phi ^{\prime \prime }\). Note also that the time interval \(I\) is compact, as the formula \(\phi\) is assumed to be bounded. The main idea is to show that infimum and supremum operators reduce to minimum and maximum operators that allow us to show measurability. Recall, therefore, the definition of the future until operator \(\beta ^{\phi ^{\prime } \vec{U}_I \phi ^{\prime \prime }}(X(\cdot ,\omega),t)\) as \(\begin{align*} \beta ^{\phi ^{\prime } \vec{U}_I \phi ^{\prime \prime }}(X(\cdot ,\omega),t)&:=\sup _{t^{\prime \prime }\in (t\oplus I)\cap T}\Big (\min \big (\beta ^{\phi ^{\prime \prime }}(X(\cdot ,\omega),t^{\prime \prime }),\inf _{t^{\prime }\in [t,t^{\prime \prime }]\cap T}\beta ^{\phi ^{\prime }}(X(\cdot ,\omega),t^{\prime })\big)\Big). \end{align*}\)

We first show that the infimum operator in \(\beta ^{\phi ^{\prime } \vec{U}_I \phi ^{\prime \prime }}(X(\cdot ,\omega),t)\) reduces to a min operator. In particular, note that \({\text{inf}}_{t^{\prime }\in [t,t^{\prime \prime }]\cap T}\beta ^{\phi ^{\prime }}(X(\cdot ,\omega),t^{\prime })\) is taken over the compact time interval \([t,t^{\prime \prime }]\cap T\) instead of the open interval \((t,t^{\prime \prime })\cap T\), due to the interpretation of the until operator as the non-strict matching version. The minimum \({\text{min}}_{t^{\prime }\in [t,t^{\prime \prime }]\cap T}\beta ^{\phi ^{\prime }}(X(\cdot ,\omega),t^{\prime })\) exists because (1) the minimum is over the compact time interval \([t,t^{\prime \prime }]\cap T=[t,t^{\prime \prime }]\) (recall that \(T=\mathbb {R}\)), and (2) the range of \(\beta ^{\phi ^{\prime }}(X(\cdot ,\omega),t)\) is restricted to \(\mathbb {B}\).

Consequently, the minimum corresponds to the infimum and it follows that \(\begin{align*} \underset{t^{\prime }\in [t,t^{\prime \prime }]\cap T}{\text{inf}}\beta ^{\phi ^{\prime }}(X(\cdot ,\omega),t^{\prime })=\underset{t^{\prime }\in [t,t^{\prime \prime }]\cap T}{\text{min}}\beta ^{\phi ^{\prime }}(X(\cdot ,\omega),t^{\prime }). \end{align*}\) Now, it holds that \({\text{min}}_{t^{\prime }\in [t,t^{\prime \prime }]\cap T}\beta ^{\phi ^{\prime }}(X(\cdot ,\omega),t^{\prime })\) is equivalent to \(\beta ^{\phi ^{\prime }}(X(\cdot ,\omega),t^{\prime })\) for some \(t^{\prime }\in [t,t^{\prime \prime }]\cap T\). Since \(\beta ^{\phi ^{\prime }}(X(\cdot ,\omega),t^{\prime })\) is measurable in \(\omega\) by the induction assumption, it follows that the function \({\text{inf}}_{t^{\prime }\in [t,t^{\prime \prime }]\cap T}\beta ^{\phi ^{\prime }}(X(\cdot ,\omega),t^{\prime })\) is measurable in \(\omega\) for a fixed \(t\in T\). Note next that the supremum operator in \(\beta ^{\phi ^{\prime } \vec{U}_I \phi ^{\prime \prime }}(X(\cdot ,\omega),t)\) reduces to a max operator due to \(I\) being compact and following a similar argument as for the infimum operator. Measurability of \(\beta ^{\phi ^{\prime } \vec{U}_I \phi ^{\prime \prime }}(X(\cdot ,\omega),t)\) in \(\omega\) for a fixed \(t\in T\) then follows as in the proof of Theorem 1. The proof for \(\beta ^{\phi ^{\prime } \vec{\underline{U}}_I \phi ^{\prime \prime }}(X(\cdot ,\omega),t)\) follows similarly.


C.2 Robustness Degree RDϕ (X,t)

As shown in the proof of Theorem 1, the function \(\text{RD}^\phi :\mathfrak {F}(T,\mathbb {R}^n)\times T\rightarrow \mathbb {R}\) is continuous and hence Borel-measurable in its first argument for a fixed \(t\in T\). By the assumption that \(X(\cdot ,\omega):\Omega \rightarrow \mathfrak {F}(T,\mathbb {R}^n)\) is Borel-measurable, the result follows trivially.


C.3 Robust Semantics ρϕ (X,t)

The proof follows mainly from Reference [8, Theorem 6]. However, to apply this result, we need to show that the robust semantics \(\rho ^\mu (\zeta ,t)\) of predicates \(\mu\) are continuous in \(\zeta \in \mathbb {R}^n,\) where we recall that \(\begin{align*} \rho ^{\mu }(\zeta ,t) := {\left\lbrace \begin{array}{ll} \bar{d}(\zeta ,\text{cl}(O^{\lnot \mu })) &\text{if } \zeta \in O^{\mu }\\ -\bar{d}(\zeta ,\text{cl}(O^{\mu })) &\text{otherwise.} \end{array}\right.} \end{align*}\)

Note that the functions \(\bar{d}(\zeta ,\text{cl}(O^{\lnot \mu }))\) and \(\bar{d}(\zeta ,\text{cl}(O^{\mu }))\) are continuous in \(\zeta\) (see, e.g., Munkres [56, Chapter 3]). By definition, we have \(\rho ^\mu (\zeta ,t)=0\) if \(\zeta \in \text{bd}(O^\mu),\) where \(\text{bd}(O^\mu)\) denotes the boundary of \(O^\mu\). Note also that \(\bar{d}(\zeta ,\text{cl}(O^{\lnot \mu }))\rightarrow 0\) as \(\zeta \rightarrow \text{bd}(O^\mu)\) as well as \(-\bar{d}(\zeta ,\text{cl}(O^{\mu }))\rightarrow 0\) as \(\zeta \rightarrow \text{bd}(O^\mu)\). It follows that \(\rho ^{\mu }(\zeta ,t)\) is continuous in \(\zeta\). The assumption that \(X(\cdot ,\omega)\) is a cadlag function for each \(\omega \in \Omega\) then enables us to apply Theorem 6 in Bartocci et al. [8].

D PROOF OF THEOREM 3

First note that \(\rho ^\phi (X(\cdot ,\omega),t)\le \text{RD}^{\phi }(X(\cdot ,\omega),t)\) for each realization \(X(\cdot ,\omega)\) of the stochastic process \(X\) with \(\omega \in \Omega\) due to Equation (5). Consequently, we have that \(-\text{RD}^{\phi }(X(\cdot ,\omega),t)\le -\rho ^\phi (X(\cdot ,\omega),t)\) for all \(\omega \in \Omega\). If \(R\) is now monotone, then it directly follows that \(R(-\text{RD}^{\phi }(X,t))\le R(-\rho ^\phi (X,t))\).

E PROOF OF PROPOSITION 1

Let us assume that \(X^1,... ,X^N\) are \(N\) independent copies of \(X\). Consequently, all \(Z^i\) contained within \(\mathcal {Z}\) are independent and identically distributed. We first recall the tight version of the Dvoretzky-Kiefer-Wolfowitz inequality as originally presented in Massart [52], which requires that \(F_Z\) is continuous.

Lemma 1.

Let \(\widehat{F}(\alpha ,\mathcal {Z})\) be based on the data \(\mathcal {Z}\) consisting of \(Z^1,... ,Z^N\), which are \(N\) independent copies of \(Z\). Let \(c\gt 0\) be a desired precision, then it holds that \(\begin{align*} P\left(\sup _\alpha |\widehat{F}(\alpha ,\mathcal {Z})-{F}_{Z}(\alpha)|\gt c\right) \le 2\exp (-2 Nc^2). \end{align*}\)

By setting \(\delta :=2\exp (-2 Nc^2)\) in Lemma 1, it holds with a probability of at least \(1-\delta\) that \(\begin{align*} \sup _\alpha |\widehat{F}(\alpha ,\mathcal {Z})-{F}_{Z}(\alpha)| \le \sqrt {\frac{\ln (2/\delta)}{2N}}. \end{align*}\)

With a probability of at least \(1-\delta\), it now holds that \(\begin{align*} F_{Z}(\alpha)\le \widehat{F}(\alpha ,\mathcal {Z})+\sqrt {\frac{\ln (2/\delta)}{2N}} \text{ for all } \alpha \end{align*}\) as well as \(\begin{align*} \widehat{F}(\alpha ,\mathcal {Z})-\sqrt {\frac{\ln (2/\delta)}{2N}}\le F_{Z}(\alpha) \text{ for all } \alpha . \end{align*}\) Hence, it holds with a probability of at least \(1-\delta\) that \(\begin{align*} \inf \Big \lbrace \alpha \in \mathbb {R}\Big | \widehat{F}(\alpha ,\mathcal {Z})\ge \beta +\sqrt {\frac{\ln (2/\delta)}{2N}}\Big \rbrace \ge \inf \lbrace \alpha \in \mathbb {R}| F_{Z}(\alpha)\ge \beta \rbrace =VaR_\beta (Z) \end{align*}\) as well as \(\begin{align*} \inf \Big \lbrace \alpha \in \mathbb {R}\Big | \widehat{F}(\alpha ,\mathcal {Z})\ge \beta -\sqrt {\frac{\ln (2/\delta)}{2N}}\Big \rbrace \le \inf \lbrace \alpha \in \mathbb {R}| F_{Z}(\alpha)\ge \beta \rbrace =VaR_\beta (Z). \end{align*}\) By the definition of \(\underline{VaR}_\beta (\mathcal {Z},\delta)\) and \(\overline{VaR}_\beta (\mathcal {Z},\delta)\), it holds with a probability of at least \(1-\delta\) that \(\begin{align*} \underline{VaR}_\beta (\mathcal {Z},\delta)\le VaR_\beta (Z)\le \overline{VaR}_\beta (\mathcal {Z},\delta). \end{align*}\)
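The construction behind this proof can be sketched numerically: invert the empirical CDF at the levels \(\beta \pm \sqrt {\ln (2/\delta)/(2N)}\). The function names are ours; when the shifted level exceeds 1, the upper bound is vacuous.

```python
import math

def var_bounds(samples, beta, delta):
    """Lower and upper bounds on VaR_beta(Z) that hold with probability
    at least 1 - delta, obtained by inverting the empirical CDF at the
    levels beta -/+ the DKW band half-width (a sketch of the construction
    behind Proposition 1)."""
    z = sorted(samples)
    n = len(z)
    c = math.sqrt(math.log(2.0 / delta) / (2.0 * n))

    def inv_cdf(level):
        # smallest sample value whose empirical CDF reaches `level`
        k = math.ceil(level * n)          # empirical CDF at z[k-1] is k/n
        if k > n:
            return float("inf")           # shifted level above 1: vacuous
        return z[max(k, 1) - 1]

    return inv_cdf(beta - c), inv_cdf(beta + c)   # (lower, upper)
```

For \(N=1{,}000\) samples, \(\beta =0.9\), and \(\delta =0.01\), the band has half-width of roughly 0.051, so the two bounds bracket the empirical 0.9-quantile.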

F PROOF OF PROPOSITION 3

Let us again assume that \(X^1,... ,X^N\) are \(N\) independent copies of \(X\). Consequently, all \(Z^i\) contained within \(\mathcal {Z}\) are independent and identically distributed. Note first that \(\widehat{E}(\mathcal {Z})\) is a random variable with the expected value according to \(\begin{align*} E(\widehat{E}(\mathcal {Z}))&=\frac{1}{N}\sum _{i=1}^N E(Z^i)=\frac{1}{N}\sum _{i=1}^N E(Z)=E(Z). \end{align*}\) For \(c\gt 0\), we can now apply Hoeffding’s inequality and obtain the concentration inequality \(\begin{align*} P\big (|\widehat{E}(\mathcal {Z})-E(Z)|\ge c\big)&\le 2\exp \Big (-\frac{2Nc^2}{(b-a)^2}\Big). \end{align*}\) By setting \(\delta :=2\exp (-\frac{2Nc^2}{(b-a)^2})\), it holds with a probability of at least \(1-\delta\) that \(\begin{align*} |\widehat{E}(\mathcal {Z})-E(Z)| \le \sqrt {\frac{\ln (2/\delta)(b-a)^2}{2N}}. \end{align*}\) From this inequality, the result follows trivially.
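The resulting bound is easy to evaluate; the following sketch (function names are ours) computes the half-width of the confidence interval and, conversely, the sample size needed for a desired precision:

```python
import math

def hoeffding_band(n, delta, a, b):
    # Half-width of the (1 - delta)-confidence interval around the
    # empirical mean of n i.i.d. samples bounded in [a, b].
    return math.sqrt(math.log(2.0 / delta) * (b - a) ** 2 / (2.0 * n))

def samples_needed(eps, delta, a, b):
    # Smallest n for which the half-width is at most eps.
    return math.ceil(math.log(2.0 / delta) * (b - a) ** 2 / (2.0 * eps ** 2))
```

For example, guaranteeing a half-width of 0.1 with \(\delta =0.05\) and samples bounded in \([-1,1]\) requires 738 samples.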

G DISCRETIZATION OF c AND d IN EXAMPLE 1

To discretize the distributions of \(c\) and \(d\) in Equations (6) and (7), respectively, let \(M:=32\) be the number of desired discretization steps and \(\gamma :=0.55\) be a discretization bound. We uniformly discretize the interval \([-\gamma ,\gamma ]\) into \(M\) values \((s_1,... ,s_M)\) where \(s_m\lt s_{m+1}\). We additionally add \(s_0:=0\) and define \(S:=(s_0,s_1,... ,s_M)\). We now assign a PMF \(f_S(s_m)\) to each element \(s_m\in S\) as \(\begin{align*} f_S(s_m):={\left\lbrace \begin{array}{ll} F_\mathcal {N}(s_m) & \text{if } s_m=s_1\\ F_\mathcal {N}(s_m)- F_\mathcal {N}(s_{m-1}) & \text{if } s_1\lt s_m\lt 0\\ 2(F_\mathcal {N}(s_m)- F_\mathcal {N}(s_{m-1})) & \text{if } s_m=0\\ F_\mathcal {N}(s_{m+1})- F_\mathcal {N}(s_m) & \text{if } 0\lt s_m\lt s_M\\ 1- F_\mathcal {N}(s_m) & \text{if } s_m=s_M,\\ \end{array}\right.} \end{align*}\)

where \(F_\mathcal {N}(s)\) is the CDF of \(\mathcal {N}(0,0.2)\) (according to Equations (6) and (7)). We now assume, instead of Equations (6) and (7), that \(c\) and \(d\) take values in the sets \(\begin{align*} \mathcal {C}:=(2\oplus S) \times (3 \oplus S),\\ \mathcal {D}:=(6\oplus S) \times (4 \oplus S), \end{align*}\)

where 2, 3, 6, and 4 are the mean values of \(c\) and \(d\) in Equations (6) and (7), respectively. Finally, we assume that the distributions of \(c=\begin{bmatrix}c_1 & c_2 \end{bmatrix}^T\) and \(d=\begin{bmatrix}d_1 & d_2 \end{bmatrix}^T\) are according to the PMFs \(f_c(c):=f_S(c_1)f_S(c_2)\) and \(f_d(d):=f_S(d_1)f_S(d_2)\), respectively.
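The case distinction above can be implemented as follows; the function names are ours, and the Gaussian CDF is evaluated via the error function. With \(M=32\), zero is not a grid point, which is why \(s_0:=0\) carries the mass of the cell straddling zero:

```python
import math

def normal_cdf(x, sigma=0.2):
    # CDF of N(0, sigma)
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

def discretized_pmf(M=32, gamma=0.55, sigma=0.2):
    """PMF f_S over S = (s_0, s_1, ..., s_M): a uniform grid on
    [-gamma, gamma] plus the extra point s_0 = 0.  Tail mass is folded
    into the end points, negative points take the cell to their left,
    positive points the cell to their right, and s_0 = 0 takes the
    cell straddling zero (the case distinction of Appendix G)."""
    grid = [-gamma + 2.0 * gamma * m / (M - 1) for m in range(M)]  # s_1 < ... < s_M
    F = lambda x: normal_cdf(x, sigma)
    pmf = {}
    for m, s in enumerate(grid):
        if m == 0:
            pmf[s] = F(s)                      # left tail: F(s_1)
        elif s < 0:
            pmf[s] = F(s) - F(grid[m - 1])
        elif m < M - 1:
            pmf[s] = F(grid[m + 1]) - F(s)
        else:
            pmf[s] = 1.0 - F(s)                # right tail: 1 - F(s_M)
    s_neg = max(g for g in grid if g < 0)      # largest negative grid point
    pmf[0.0] = 2.0 * (F(0.0) - F(s_neg))       # cell straddling zero, by symmetry
    return pmf
```

By construction, the masses telescope to one, so the result is a valid PMF over the \(M+1\) points of \(S\).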

Footnotes

  1. We use the notation \(\mathfrak {F}(A,B)\) to denote the set of all measurable functions mapping from the domain \(A\) into the domain \(B\), i.e., an element \(f\in \mathfrak {F}(A,B)\) is a measurable function \(f:A\rightarrow B\).
  2. We use the notation \(\oplus\) and \(\ominus\) to denote the Minkowski sum and the Minkowski difference, respectively.
  3. The robustness degree in Fainekos and Pappas [21, Definition 7] is defined slightly differently by instead considering the signed distance of the signal \(x\) to the set of violating signals \(\mathcal {L}^{\lnot \phi }(t)\).
  4. Particularly, this probability space is \((\mathbb {R}^n,\mathcal {B}^n,P_Z)\) where, for Borel sets \(B\in \mathcal {B}^n\), the probability measure \(P_Z:\mathcal {B}^n\rightarrow [0,1]\) is defined as \(P_Z(B):=P(Z^{-1}(B)),\) where \(Z^{-1}(B):=\lbrace \omega \in \Omega |Z(\omega)\in B\rbrace\) is the inverse image of \(B\) under \(Z\).
  5. Here, we mean measurable with respect to the Borel \(\sigma\)-algebras induced by the Skorokhod metric; see Reference [8] for details.
  6. Cadlag functions are right-continuous functions with left limits.
  7. The result for measurability of \(\rho ^\phi (X(\cdot ,\omega),t)\) is mainly taken from Reference [8, Theorem 6].
  8. We use the shorthand notations \(P(\beta ^{\phi }(X,t)\in B)\), \(P(\rho ^{\phi }(X,t)\in B)\), and \(P(\text{RD}^{\phi }(X,t)\in B)\) instead of \(P(\lbrace \omega \in \Omega |\beta ^{\phi }(X(\cdot ,\omega),t)\in B\rbrace)\), \(P(\lbrace \omega \in \Omega |\rho ^{\phi }(X(\cdot ,\omega),t)\in B\rbrace)\), and \(P(\lbrace \omega \in \Omega |\text{RD}^{\phi }(X(\cdot ,\omega),t)\in B\rbrace)\), respectively.
  9. In practice, it hence makes sense to select a negative value for \(a\) and to select \(b\) based on physical intuition that we may have, either from trajectories that we have already observed or from domain knowledge; e.g., for a lane-keeping controller in autonomous driving, the value \(b=1\) m indicates good robustness.
  10. We can select smaller \(N\) at the cost of slightly more conservative estimates.
  11. We restrict \(\rho ^{\phi _1}\) to lie within the interval \([-1.25,2.25]\), i.e., in this case, we clip the values of \(\rho ^{\phi _1}(X,0)=\inf _{t\in \mathbb {Z}} 2.25-|c_e(t)|\) to \(-1.25\) if \(\rho ^{\phi _1}(X,0)\lt -1.25\). In the remainder, we clip \(\rho ^{\phi _2}\)-\(\rho ^{\phi _5}\) in the same way for the specifications \(\phi _2\)-\(\phi _5\).

REFERENCES

  [1] Gul Agha and Karl Palmskog. 2018. A survey of statistical model checking. ACM Trans. Model. Comput. Simul. 28, 1 (2018), 1–39.
  [2] Mohamadreza Ahmadi, Xiaobin Xiong, and Aaron D. Ames. 2022. Risk-averse control via CVaR barrier functions: Application to bipedal robot locomotion. IEEE Contr. Syst. Lett. 6 (2022), 878–883.
  [3] Takumi Akazaki and Ichiro Hasuo. 2015. Time robustness in MTL and expressivity in hybrid system falsification. In Proceedings of the International Conference on Computer-Aided Verification. 356–374.
  [4] Tzanis Anevlavis, Matthew Philippe, Daniel Neider, and Paulo Tabuada. 2022. Being correct is not enough: Efficient verification using robust linear temporal logic. ACM Trans. Computat. Logic 23, 2 (2022), 1–39.
  [5] Nasim Baharisangari, Jean-Raphaël Gaglione, Daniel Neider, Ufuk Topcu, and Zhe Xu. 2021. Uncertainty-aware signal temporal logic inference. In Software Verification. Springer, 61–85.
  [6] Christel Baier and Joost-Pieter Katoen. 2008. Principles of Model Checking (1st ed.). The MIT Press, Cambridge, MA.
  [7] Ezio Bartocci, Luca Bortolussi, Laura Nenzi, and Guido Sanguinetti. 2013. On the robustness of temporal properties for stochastic models. In Proceedings of the Workshop on Hybrid Systems and Biology. 3–19.
  [8] Ezio Bartocci, Luca Bortolussi, Laura Nenzi, and Guido Sanguinetti. 2015. System design of stochastic models using robustness of temporal properties. Theoret. Comput. Sci. 587 (2015), 3–25.
  [9] Ezio Bartocci, Jyotirmoy Deshmukh, Alexandre Donzé, Georgios Fainekos, Oded Maler, Dejan Ničković, and Sriram Sankaranarayanan. 2018. Specification-based monitoring of cyber-physical systems: A survey on theory, tools and applications. In Lectures on Runtime Verification. Springer, 135–175.
  [10] Suda Bharadwaj, Rayna Dimitrova, and Ufuk Topcu. 2018. Synthesis of surveillance strategies via belief abstraction. In Proceedings of the Conference on Decision and Control. 4159–4166.
  [11] Sanjay P. Bhat and L. A. Prashanth. 2019. Concentration of risk measures: A Wasserstein distance approach. Proc. Conf. Neural Inf. Process. Syst. 32 (2019), 11762–11771.
  [12] National Transportation Safety Board. 2019. Collision between vehicle controlled by developmental automated driving system and pedestrian. Highw. Accid. Rep. NTSB/HAR-19/03 (2019).
  [13] David B. Brown. 2007. Large deviations bounds for estimating conditional value-at-risk. Oper. Res. Lett. 35, 6 (2007), 722–730.
  [14] Christos G. Cassandras and Stephane Lafortune. 2009. Introduction to Discrete Event Systems. Springer Science & Business Media.
  [15] Margaret P. Chapman, Jonathan Lacotte, Aviv Tamar, Donggun Lee, Kevin M. Smith, Victoria Cheng, Jaime F. Fisac, Susmit Jha, Marco Pavone, and Claire J. Tomlin. 2019. A risk-sensitive finite-time reachability approach for safety of stochastic dynamic systems. In Proceedings of the American Control Conference. 2958–2963.
  [16] Margaret P. Chapman, Jonathan P. Lacotte, Kevin M. Smith, Insoon Yang, Yuxi Han, Marco Pavone, and Claire J. Tomlin. 2019. Risk-sensitive safety specifications for stochastic systems using conditional value-at-risk. arXiv preprint arXiv:1909.09703 (2019).
  [17] Jeremy Coulson, John Lygeros, and Florian Dörfler. 2021. Distributionally robust chance constrained data-enabled predictive control. IEEE Trans. Automat. Contr. 67, 7 (2021).
  [18] Alexandre Donzé and Oded Maler. 2010. Robust satisfaction of temporal logic over real-valued signals. In Proceedings of the Conference on Formal Modeling and Analysis of Timed Systems. 92–106.
  [19] Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. 2017. CARLA: An open urban driving simulator. In Proceedings of the Conference on Robot Learning. 1–16.
  [20] Rick Durrett. 2019. Probability: Theory and Examples. Vol. 49. Cambridge University Press.
  [21] Georgios E. Fainekos and George J. Pappas. 2009. Robustness of temporal logic specifications for continuous-time signals. Theoret. Comput. Sci. 410, 42 (2009), 4262–4291.
  [22] Chuchu Fan, Bolun Qi, Sayan Mitra, and Mahesh Viswanathan. 2017. DryVR: Data-driven verification and compositional reasoning for automotive systems. In Proceedings of the International Conference on Computer Aided Verification. 441–461.
  [23] Samira S. Farahani, Rupak Majumdar, Vinayak S. Prabhu, and Sadegh Soudjani. 2018. Shrinking horizon model predictive control with signal temporal logic constraints under stochastic disturbances. IEEE Trans. Automat. Contr. 64, 8 (2018), 3324–3331.
  [24] Carlo Alberto Furia and Matteo Rossi. 2007. On the expressiveness of MTL variants over dense time. In Proceedings of the International Conference on Formal Modeling and Analysis of Timed Systems. 163–178.
  [25] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014).
  [26] Charalambos D. Aliprantis and Kim C. Border. 2006. Infinite Dimensional Analysis: A Hitchhiker's Guide. Springer.
  [27] Meng Guo and Michael M. Zavlanos. 2018. Probabilistic motion planning under temporal tasks and soft constraints. IEEE Trans. Automat. Contr. 63, 12 (2018), 4051–4066.
  [28] Iman Haghighi, Noushin Mehdipour, Ezio Bartocci, and Calin Belta. 2019. Control from signal temporal logic specifications with smooth cumulative quantitative semantics. In Proceedings of the Conference on Decision and Control. 4361–4366.
  [29] Eunjeong Hyeon, Youngki Kim, and Anna G. Stefanopoulou. 2020. Fast risk-sensitive model predictive control for systems with time-series forecasting uncertainties. In Proceedings of the Conference on Decision and Control. 2515–2520.
  [30] Radoslav Ivanov, James Weimer, Rajeev Alur, George J. Pappas, and Insup Lee. 2019. Verisig: Verifying safety properties of hybrid systems with neural network controllers. In Proceedings of the International Conference on Hybrid Systems: Computation and Control. 169–178.
  [31] John Jackson, Luca Laurenti, Eric Frew, and Morteza Lahijanian. 2021. Formal verification of unknown dynamical systems via Gaussian process regression. arXiv preprint arXiv:2201.00655 (2021).
  [32] Pushpak Jagtap, Sadegh Soudjani, and Majid Zamani. 2018. Temporal logic verification of stochastic systems using barrier certificates. In Proceedings of the International Symposium on Automated Technology for Verification and Analysis. 177–193.
  [33] Ashkan Jasour, Weiqiao Han, and Brian Williams. 2021. Real-time risk-bounded tube-based trajectory safety verification. In Proceedings of the Conference on Decision and Control. 4307–4313.
  [34] Ashkan Jasour, Xin Huang, Allen Wang, and Brian C. Williams. 2021. Fast nonlinear risk assessment for autonomous vehicles using learned conditional probabilistic models of agent futures. Auton. Robot (2021), 1–14.
  [35] Susmit Jha, Vasumathi Raman, Dorsa Sadigh, and Sanjit A. Seshia. 2018. Safe autonomy under perception uncertainty using chance-constrained temporal logic. J. Automat. Reason. 60, 1 (2018), 43–62.
  [36] Olav Kallenberg. 1997. Foundations of Modern Probability. Vol. 2. Springer.
  [37] Dionysios S. Kalogerias, Luiz F. O. Chamon, George J. Pappas, and Alejandro Ribeiro. 2020. Better safe than sorry: Risk-aware nonlinear Bayesian estimation. In Proceedings of the Conference on Acoustics, Speech and Signal Processing. 5480–5484.
  [38] Guy Katz, Derek A. Huang, Duligur Ibeling, Kyle Julian, Christopher Lazarus, Rachel Lim, Parth Shah, Shantanu Thakoor, Haoze Wu, Aleksandar Zeljić, et al. 2019. The Marabou framework for verification and analysis of deep neural networks. In Proceedings of the International Conference on Computer Aided Verification. 443–452.
  [39] Ravi Kumar Kolla, L. A. Prashanth, Sanjay P. Bhat, and Krishna Jagannathan. 2019. Concentration bounds for empirical conditional value-at-risk: The unbounded case. Oper. Res. Lett. 47, 1 (2019), 16–20.
  [40] Marta Kwiatkowska, Gethin Norman, and David Parker. 2007. Stochastic model checking. In Proceedings of the International School on Formal Methods for the Design of Computer, Communication and Software Systems. 220–270.
  [41] Panagiotis Kyriakis, Jyotirmoy V. Deshmukh, and Paul Bogdan. 2019. Specification mining and robust design under uncertainty: A stochastic temporal logic approach. ACM Trans. Embed. Comput. Syst. 18, 5s (2019), 1–21.
  [42] Morteza Lahijanian, Sean B. Andersson, and Calin Belta. 2015. Formal verification and synthesis for discrete-time stochastic systems. IEEE Trans. Automat. Contr. 60, 8 (2015), 2031–2045.
  [43] Axel Legay, Anna Lukina, Louis Marie Traonouez, Junxing Yang, Scott A. Smolka, and Radu Grosu. 2019. Statistical model checking. In Computing and Software Science. Springer, 478–504.
  [44] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. 2016. End-to-end training of deep visuomotor policies. J. Mach. Learn. Res. 17, 1 (2016), 1334–1373.
  [45] Jiwei Li, Pierluigi Nuzzo, Alberto Sangiovanni-Vincentelli, Yugeng Xi, and Dewei Li. 2017. Stochastic contracts for cyber-physical system design under probabilistic requirements. In Proceedings of the International Conference on Formal Methods and Models for System Design. 5–14.
  [46] Xiao Li, Jonathan DeCastro, Cristian Ioan Vasile, Sertac Karaman, and Daniela Rus. 2022. Learning a risk-aware trajectory planner from demonstrations using logic monitor. In Proceedings of the Conference on Robot Learning. PMLR, 1326–1335.
  [47] Lars Lindemann, Nikolai Matni, and George J. Pappas. 2021. STL robustness risk over discrete-time stochastic processes. In Proceedings of the Conference on Decision and Control. 1329–1335.
  [48] Lars Lindemann, George J. Pappas, and Dimos V. Dimarogonas. 2021. Reactive and risk-aware control for signal temporal logic. IEEE Trans. Automat. Contr. 67, 10 (2021).
  [49] Lars Lindemann, Alexander Robey, Lejun Jiang, Stephen Tu, and Nikolai Matni. 2021. Learning robust output control barrier functions from safe expert demonstrations. arXiv preprint arXiv:2111.09971 (2021).
  [50] Anirudha Majumdar and Marco Pavone. 2020. How should a robot assess risk? Towards an axiomatic theory of risk in robotics. In Robotics Research. Springer, 75–84.
  [51] Oded Maler and Dejan Nickovic. 2004. Monitoring temporal properties of continuous signals. In Proceedings of the Conference on Formal Techniques, Modelling and Analysis of Timed and Fault-tolerant Systems. 152–166.
  [52] Pascal Massart. 1990. The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality. Ann. Probab. (1990), 1269–1283.
  [53] Noushin Mehdipour, Cristian-Ioan Vasile, and Calin Belta. 2019. Arithmetic-geometric mean robustness for control from signal temporal logic specifications. In Proceedings of the American Control Conference. 1690–1695.
  [54] Zakaria Mhammedi, Benjamin Guedj, and Robert C. Williamson. 2020. PAC-Bayesian bound for the conditional value at risk. Proc. Conf. Adv. Neural Inf. Process. Syst. 33 (2020), 17919–17930.
  [55] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. Nature 518, 7540 (2015), 529–533.
  [56] James R. Munkres. 2000. Topology (2nd ed.). Prentice Hall.
  [57] Konstantinos E. Nikolakakis, Dionysios S. Kalogerias, Or Sheffet, and Anand D. Sarwate. 2021. Quantile multi-armed bandits: Optimal best-arm identification and a differentially private scheme. IEEE J. Select. Areas Inf. Theor. 2, 2 (2021), 534–548.
  [58] Truls Nyberg, Christian Pek, Laura Dal Col, Christoffer Norén, and Jana Tumova. 2021. Risk-aware motion planning for autonomous vehicles with safety specifications. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV). IEEE, 1016–1023.
  [59] Stephen Prajna, Ali Jadbabaie, and George J. Pappas. 2007. A framework for worst-case and stochastic safety verification using barrier certificates. IEEE Trans. Automat. Contr. 52, 8 (2007), 1415–1428.
  [60] Aniruddh G. Puranic, Jyotirmoy V. Deshmukh, and Stefanos Nikolaidis. 2021. Learning from demonstrations using signal temporal logic. In Proceedings of the Conference on Robot Learning.
  [61] Alexander Robey, Hamed Hassani, and George J. Pappas. 2020. Model-based robust deep learning: Generalizing to natural, out-of-distribution data. arXiv preprint arXiv:2005.10247 (2020).
  [62] R. Tyrrell Rockafellar and Stanislav Uryasev. 2000. Optimization of conditional value-at-risk. J. Risk 2 (2000), 21–42.
  [63] R. Tyrrell Rockafellar and Stanislav Uryasev. 2002. Conditional value-at-risk for general loss distributions. J. Bank. Fin. 26, 7 (2002), 1443–1471.
  [64] Alena Rodionova, Ezio Bartocci, Dejan Nickovic, and Radu Grosu. 2016. Temporal logic as filtering. In Proceedings of the International Conference on Hybrid Systems: Computation and Control. 11–20.
  [65] Stéphane Ross and Drew Bagnell. 2010. Efficient reductions for imitation learning. In Proceedings of the International Conference on Artificial Intelligence and Statistics. 661–668.
  [66] Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the International Conference on Artificial Intelligence and Statistics. 627–635.
  [67] Dorsa Sadigh and Ashish Kapoor. 2016. Safe control under uncertainty with probabilistic signal temporal logic. In Proceedings of Robotics: Science and Systems XII. Ann Arbor, Michigan.
  [68] Sadra Sadraddini and Calin Belta. 2015. Robust temporal logic model predictive control. In Proceedings of the Conference on Communication, Control, and Computing. 772–779.
  [69] Sleiman Safaoui, Lars Lindemann, Dimos V. Dimarogonas, Iman Shames, and Tyler H. Summers. 2020. Control design for risk-based signal temporal logic specifications. IEEE Contr. Syst. Lett. 4, 4 (2020), 1000–1005.
  [70] Ali Salamati, Sadegh Soudjani, and Majid Zamani. 2020. Data-driven verification under signal temporal logic constraints. IFAC-PapersOnLine 53, 2 (2020), 69–74.
  [71] Ali Salamati, Sadegh Soudjani, and Majid Zamani. 2021. Data-driven verification of stochastic linear systems with signal temporal logic constraints. Automatica 131 (2021), 109781.
  [72] Samantha Samuelson and Insoon Yang. 2018. Safety-aware optimal control of stochastic systems using conditional value-at-risk. In Proceedings of the American Control Conference. 6285–6290.
  [73] Mathijs Schuurmans and Panagiotis Patrinos. 2020. Learning-based distributionally robust model predictive control of Markovian switching systems with guaranteed stability and recursive feasibility. In Proceedings of the Conference on Decision and Control. 4287–4292.
  [74] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. 2018. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362, 6419 (2018), 1140–1144.
  [75] Gagandeep Singh, Timon Gehr, Markus Püschel, and Martin Vechev. 2019. An abstract domain for certifying neural networks. Proc. ACM Program. Lang. 3 (2019), 1–30.
  [76] Sumeet Singh, Yinlam Chow, Anirudha Majumdar, and Marco Pavone. 2018. A framework for time-consistent, risk-sensitive model predictive control: Theory and algorithms. IEEE Trans. Automat. Contr. 64, 7 (2018), 2905–2912.
  [77] Jiawei Su, Danilo Vasconcellos Vargas, and Kouichi Sakurai. 2019. One pixel attack for fooling deep neural networks. IEEE Trans. Evolut. Computat. 23, 5 (2019), 828–841.
  [78] Balazs Szorenyi, Róbert Busa-Fekete, Paul Weng, and Eyke Hüllermeier. 2015. Qualitative multi-armed bandits: A quantile-based approach. In Proceedings of the International Conference on Machine Learning. 1660–1668.
  [79] Philip Thomas and Erik Learned-Miller. 2019. Concentration inequalities for conditional value at risk. In Proceedings of the International Conference on Machine Learning. 6225–6233.
  [80] Mattias Tiger and Fredrik Heintz. 2020. Incremental reasoning in probabilistic signal temporal logic. Int. J. Approx. Reason. 119 (2020), 325–352.
  [81] Anastasios Tsiamis, Dionysios S. Kalogerias, Alejandro Ribeiro, and George J. Pappas. 2021. Linear quadratic control with risk constraints. arXiv preprint arXiv:2112.07564 (2021).
  [82] Cristian-Ioan Vasile, Kevin Leahy, Eric Cristofalo, Austin Jones, Mac Schwager, and Calin Belta. 2016. Control in belief space with temporal logic specifications. In Proceedings of the Conference on Decision and Control. 7419–7424.
  [83] Ying Wang and Fuqing Gao. 2010. Deviation inequalities for an estimator of the conditional value-at-risk. Oper. Res. Lett. 38, 3 (2010), 236–239.
  [84] Yu Wang, Mojtaba Zarei, Borzoo Bonakdarpour, and Miroslav Pajic. 2019. Statistical verification of hyperproperties for cyber-physical systems. ACM Trans. Embed. Comput. Syst. 18, 5s (2019), 1–23.
  [85] Paolo Zuliani, André Platzer, and Edmund M. Clarke. 2010. Bayesian statistical model checking with application to Simulink/Stateflow verification. In Proceedings of the International Conference on Hybrid Systems: Computation and Control. 243–252.


Published in ACM Transactions on Embedded Computing Systems, Volume 22, Issue 3 (May 2023), 546 pages. ISSN: 1539-9087. EISSN: 1558-3465. DOI: 10.1145/3592782. Editor: Tulika Mitra.

                      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher: Association for Computing Machinery, New York, NY, United States.

Publication History
• Received: 10 February 2022
• Revised: 12 December 2022
• Accepted: 9 January 2023
• Online AM: 17 January 2023
• Published: 19 April 2023
Published in TECS Volume 22, Issue 3.
