RiskStructures: A design algebra for risk-aware machines

Machines, such as mobile robots and delivery drones, incorporate controllers responsible for a task while handling risk (e.g. anticipating and mitigating hazards; preventing and alleviating accidents). We refer to machines with this capability as risk-aware machines. Risk awareness includes robustness and resilience and complicates monitoring (i.e., introspection, sensing, prediction), decision making, and control. From an engineering perspective, risk awareness adds a range of dependability requirements to system assurance. Such assurance mandates a correct-by-construction approach to controller design, based on mathematical theory. We introduce RiskStructures, an algebraic framework for risk modelling intended to support the design of safety controllers for risk-aware machines. Using the concept of a risk factor as a modelling primitive, this framework provides facilities to construct, examine, and assure these controllers. We prove desirable algebraic properties of these facilities and demonstrate their applicability by using them to specify key aspects of safety controllers for risk-aware automated driving and collaborative robots.

1 Introduction

For surgical robot assistance, the summary of safety mechanisms by Howe and Matsuoka [1999] ranges from active and passive mechanisms over independent safety monitors to supervisory control. This summary gives an impression of the number of risk factors to be handled by dedicated mechanisms in safety-critical robots. Moreover, Howe and Matsuoka already suggested that full robot autonomy requires improved sensor fusion and qualitative reasoning. Pushing this idea much further, Holland and Goodman [2003] discuss how autonomous robots might be able to internalise models of consciousness. Inspired by this direction, this work aims at using finite symbolic models to integrate a form of consciousness of risk into such systems, hereafter called risk awareness. To achieve this, let us first revisit some basics of the construction of failure and risk models.
Highly automated machines can both be faulty and engage in dangerous events, and they are expected to handle faults and dangerous events automatically. Hazards generalise the concept of faults, errors, and failures to any kind of dangerous event. The notion of risk then allows us to discuss temporal and causal relationships between such events. Some hazards can be avoided by design. For many hazards, however, only the likelihood of their occurrence or the severity of their consequences can be reduced. Engineers use specific models to analyse these hazards and to achieve their reduction.
In formal verification, we identify the models we accept and the ones we do not, and we compare models to decide which ones we accept more than others. In this work, we consider systems that can fail or engage in dangerous events. With the help of models, we study how we can handle such events. Let us further motivate this with examples.
Example 1 (Qualitative Evaluation of the Risk of Functional Failures) In a car airbag, the event "airbag release" is associated with two general functional hazards: failure on demand (i.e., the action is not performed when requested or when its guard is enabled) and spurious trip (i.e., the action is performed when not requested or when its guard is not enabled). The risk incorporated by these hazards depends on the current state of the driving process.
For each of these hazards, we can separate the driving process into situations. This step yields a table whose cells can be filled with risk information, particularly, an analysis of the probability and/or the consequences of an event occurrence:

Context: Vehicle in ...   Spurious trip ... of airbag:                  Failure on demand ... of airbag:
... manual mode           1. consequences from distraction & bag shock  (irrelevant)
... autonomous mode       2. consequences from bag shock                (irrelevant)
... collision             (irrelevant)                                  3. consequences from crash without airbag

Whereas case 1 can lead to a fatal car crash due to loss of control (i.e., by distraction) of the driving process, case 2, as a generalisation of case 1, may cause serious injuries but is unlikely to cause a fatal crash. Case 3 is different: there, the loss of control is irrelevant and risk is inherited from situations without an airbag.
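Such a qualitative risk table can be represented directly as a lookup structure. The following is a minimal sketch, not part of the framework itself; the cell strings and the function name `assess` are illustrative assumptions taken from the table above:

```python
# Hypothetical encoding of the Example 1 risk table: (context, hazard) -> risk
# information, with None marking cells the analysis deems irrelevant.
RISK_TABLE = {
    ("manual mode", "spurious trip"): "consequences from distraction & bag shock",
    ("manual mode", "failure on demand"): None,       # irrelevant
    ("autonomous mode", "spurious trip"): "consequences from bag shock",
    ("autonomous mode", "failure on demand"): None,   # irrelevant
    ("collision", "spurious trip"): None,             # irrelevant
    ("collision", "failure on demand"): "consequences from crash without airbag",
}

def assess(context: str, hazard: str):
    """Return the risk information for a hazard in a context, or None if irrelevant."""
    return RISK_TABLE.get((context, hazard))
```

A controller or analyst would query `assess("collision", "failure on demand")` to retrieve the consequence estimate for that cell.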
The number of contexts or situations, functions, and hazards requires a large number of these tables to be evaluated at design time or during operation. Additionally, these tables are related according to dependencies between situations, functions, and hazards (e.g. loss of human control requires a spurious trip in manual mode, risk analysis of the airbag refers to crash risk analysis without an airbag, risk analysis in manual mode is an extension of risk analysis in autonomous mode).
Example 2 (Qualitative Evaluation of the Risk of Operational Incidents) Consider a collision of a vehicle with another object. The overall risk depends on the likelihood of a collision from the current state and the possible consequences of the collision.

Context: Process with ...   Near-collision ... of vehicle:   Collision ... of vehicle:
... following vehicle       probability of collision         consequences of passive collision
... leading vehicle         probability of collision         consequences of rear-end collision
... oncoming vehicle        probability of collision         consequences of head-on collision
Example 2 illustrates the probabilistic relationship between the two events near-collision and collision, and the relationship between operational incidents and functions; that is, the airbag, the subject of risk assessment in Example 1, is the safety function mitigating risk after collision events. Note the difference between near-collision and the three other hazards: collision, and spurious trip and failure on demand of the airbag. The latter three are hard events inasmuch as the focus lies on consequence estimation, whereas near-collision can be understood as a soft event where the focus of risk assessment lies on probability estimation.

Abstractions for Machine Safety and Risk Awareness
Risk assessment of an autonomous robot encompasses
• the analysis of chains of undesired events the machine can engage in, qualitatively (i.e., from a causal viewpoint) and quantitatively (e.g. from a probabilistic viewpoint), and
• the analysis of what the machine is capable of (i.e., from a functional, situational, and performance viewpoint).
Autonomous robots have to handle many such event chains during operation. Hence, the tables mentioned above have to be identified and pre-filled at design time (e.g. by using qualitative risk matrices). Some of these tables will have to be continuously re-assessed at run-time (e.g. by prediction and quantitative risk assessment). Such robots need to continuously judge risk stemming from their past and planned behaviours, using introspection, estimating the current state, and predicting future states of the whole process. In the following, we will use the term process to refer to the operation of a robot in its physical environment.
Examples 1 and 2 stimulate two questions when designing run-time mitigation measures: Which situations, functions, and hazards are there? How do we consider all of these in a manageable run-time model? A run-time risk model identifies the undesirable or dangerous subset and the desirable or safe subset of the state space of the process. Knowing the dangerous subset helps in assessing the measures for avoiding reaching this subset or for leaving it. Knowing the safe subset helps in assessing the measures for not leaving this subset or for re-entering it. If one of these sets is completely identified, we can derive the other one by set complement. However, often a risk model helps labelling only some fragments of safe and dangerous states such that a set of unlabelled states is left. Moreover, instead of a dichotomous scale (i.e., safe or dangerous), the risk model can help evaluating risk per state on a cardinal scale (e.g. risk level of a state as a continuous measure [Sanger, 2014]), or using fuzzy sets (i.e., degrees of membership of a state in both the safe and dangerous sets).

Figure 1: Two abstraction levels: Simple risk model R with a single risk factor partitioning the state space of process model P and, this way, forming a view of P with respect to this specific risk factor
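The partial labelling and the complement relationship can be sketched as follows; the state names and the three-valued labelling are illustrative assumptions, the text only fixes the safe/dangerous/unlabelled trichotomy:

```python
# Minimal sketch of a run-time risk model as a partial labelling of a finite
# state space: states in neither subset remain unlabelled.
STATES = {"s0", "s1", "s2", "s3"}

def label(safe: set, dangerous: set):
    """Partial labelling of STATES; returns (safe, dangerous, unlabelled)."""
    assert safe.isdisjoint(dangerous)
    unlabelled = STATES - safe - dangerous
    return safe, dangerous, unlabelled

# If one subset is completely identified, the other is its set complement:
safe, dangerous, unlabelled = label({"s0", "s1"}, STATES - {"s0", "s1"})
```

With a complete labelling, `unlabelled` is empty; with a partial one (e.g. `label({"s0"}, {"s3"})`), the remaining states await further risk estimation.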
Starting from unlabelled states, we can first be permissive, that is, successively identify the dangerous subset (e.g. by estimating risk levels) and reduce the safe subset accordingly. Alternatively, we can successively expand the safe subset (e.g. by estimating risk levels) until we conclude unacceptable risk from our model. While both approaches occur in safety cultures, the latter can sometimes be too restrictive (i.e., the whole state space is a-priori unsafe). The following discussion will focus on the permissive approach.
Risk awareness results from the fact that an autonomous robot complies with or refines the risk model at run-time. Consequently, our main questions are: What constitutes a powerful risk model to make autonomous robots risk-aware? Moreover, how can we systematically engineer a consistent and valid risk model for an autonomous robot?
To separate concerns in the modelling of autonomous robots, we consider two abstractions, the process model P and the risk model R as shown in Figure 1.
P captures the behaviours we might observe in the actual process, for example, the behaviours that are generated from a robot in its environment continuously making decisions, performing logical actions (i.e., changing the data state, act_i, dotted arcs), stimulating physical actions (i.e., generating control inputs, in_i, solid arcs), and producing and observing outcomes (i.e., sensing process outputs, out_i, solid arcs). Actions and outcomes are the events of interest in P. Events and states (s_i) represent the observables to reason about behaviour. Uncertainty in P allows several outcomes or successor states from one action (e.g. in_1, in_2) associated with parameters (Λ_i) forming probability distributions on the outcomes. A dynamical model of P (e.g. a hybrid automaton) can be used instead of an uncertainty model (e.g. a MARKOV decision process).
R abstracts from P and comprises a set of risk factors, each classifying P's state space into a safe region (i.e., green node signifying the desirable subset), a risky region (i.e., red node signifying the undesirable subset), and an unlabelled region (grey node). R classifies and reduces the event space (i.e., the alphabet) of P to events relevant for risk assessment, that is, endangerments (red arcs) and mitigations (green arcs). We assume that observational refinement or some bisimilarity is established between P and R. In R, we have many choices for abstraction; for example, we can focus on logical actions (act_i), process responses (out_i), control inputs (in_i), or any combination thereof. We can also craft and compare several risk models of the same process, each representing the view of a specific risk analysis. The remainder of this work will deal with a formal framework for the systematic construction of consistent and valid risk models.
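The single-factor abstraction of Figure 1 can be sketched as a tiny state machine over the regions of R, driven by the risk-relevant events of P. This is only an illustrative sketch; the event names are assumptions, not part of the framework:

```python
# Sketch of one risk factor abstracting a process trace: endangerment events
# move the view from the safe to the risky region, mitigation events move it
# back; all other events are irrelevant to this factor.
ENDANGERMENTS = {"obstacle_appears"}
MITIGATIONS = {"emergency_brake"}

def step(region: str, event: str) -> str:
    if region == "safe" and event in ENDANGERMENTS:
        return "risky"
    if region == "risky" and event in MITIGATIONS:
        return "safe"
    return region  # event irrelevant to this risk factor

def abstract(trace):
    """Fold a process trace into the risk factor's final region."""
    region = "safe"
    for e in trace:
        region = step(region, e)
    return region
```

Several such factors, evaluated side by side over the same trace, would each provide one view of the process, which is the composition the framework develops formally.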

Related Work
Here, we will discuss related work
• in the area of algebraic methods for risk assessment,
• in dependability of repairable systems,
• in safety monitoring, and
• in risk-sensitive control and risk-aware planning.
Algebraic Methods in System Risk Assessment. To the best of the author's knowledge, this work is the first to provide an algebraic account of safety risk modelling for risk-aware autonomous systems. However, algebraic methods have been proposed for IT network security risk assessment. For example, Hamdi and Boudriga [2003] formalise the security risk management life cycle as an algebraic specification with the aim of consistency checking of particular risk analyses viewed as individual algebras. Probability of occurrence and severity of consequence are modelled as metrics over attacks (i.e., action sets) to select optimal countermeasures for attacks using multi-objective optimisation.
Although their approach focuses on security risk management at design time, their framework is inspiring for the further development of the work at hand.
Dependability Methods for Repairable Systems. Risk structures overlap in their potential use with works in the field of dependability assessment. This relationship becomes visible, for example, in the approach of Unanue et al. [2018].
The authors annotate a component architecture with a fault model, synthesise a failure-repair automaton, generate a temporal fault/repair tree, synthesise minimum cut sequences from this tree, and construct an extended form of PETRI net to calculate failure probabilities from these sequences. While they focus on pure failure assessment of generic systems, risk structures generalise their approach by including severity of consequences to enable qualitative reasoning about risk rather than only about failure.
Safety Monitoring in Autonomous Systems. One of the main intentions behind risk structures is their use as active monitors and mechanisms for handling undesired events. This has been an active area in robotics research for many years. Sobek and Chatila [1988] propose a robot architecture where a safety monitor identifies obstacles not recognised in a previous planning step. Such events trigger local corrective actions, for example, obstacle avoidance, while a mitigation monitor checks for the success of a corrective action to hand over to the main planner again. Similarly, Simmons [1994] speaks of deliberative components to handle normal situations whereas reactive behaviours are activated to handle exceptional situations. Guiochet et al. [2008] propose a risk model using mode transition systems where each mode represents a safety constraint and modes are partially ordered. Refining this approach, Mekki-Mokhtar et al. [2012] distinguish between safe, warning, and catastrophic states with safety trigger conditions. Based on this framework, Machin et al. [2018] apply model checking to determine whether catastrophic states can be reached. The authors also present a tree-based algorithm for the synthesis of mitigation strategies, that is, minimal sequences of control interventions to reach the nearest safe state from any warning state while fulfilling validity, permissiveness, and safety properties. While their approach is useful for the refined modelling of hazards, the presented work could enhance their framework with an algebraic method. Particularly, the work at hand allows one to specify dependencies between hazard-related variables.
Based on the "monitoring oriented programming" approach [Meredith et al., 2011], Huang et al. [2014] present a monitoring infrastructure for checking trace properties (specified in different formalisms) observable from communications between modules running on the robot platform ROS. Their approach indicates how the risk model presented here can be implemented as a monitor using their framework. Sorin et al. [2016] propose a framework for the modelling and generation of safety monitors for ROS-based autonomous robots using if-then rules and safety action specifications. The approach is tested with an automatic vehicle in a farm environment. While their framework considers many important implementation aspects, the presented risk model could add a methodological layer to their approach, improving scalability and the formal treatment of conflicting or interfering mitigation actions.
Risk-sensitive Control and Risk-aware Planning in Autonomous Systems. Althoff et al. [2007] discuss an approach to metric reachability for linearisable dynamics. Efficiency of the reachability analyser is achieved by discretising the linearised model. The authors apply this approach to collision avoidance control in autonomous vehicles. For this, they discuss an abstraction of the discretised model into a MARKOV chain where the pair-wise convolution of reachable sets serves the calculation of the collision risk of an ego vehicle with an oncoming vehicle.
Several applications of stochastic optimal control aim at minimising the collision risk of autonomous robots and vehicles. Althoff et al. [2011] work with a probabilistic version of inevitable collision states [Fraichard and Asama, 2004] to approximate collision probability and cost beyond the planning horizon from Monte Carlo sampling of trajectories. These metrics allow the ranking of the simulated trajectories in navigation decisions. Pereira et al. [2013] experiment with a minimum expected risk planner and a risk-aware MARKOV decision process in autonomous underwater vehicle navigation exposed to perturbations by ocean currents. Sanger [2014] discusses a framework for risk-aware control where "movement is governed by estimates of risk based on uncertainty about the current state and knowledge of the cost of errors." The author illustrates an implementation of this framework for autonomous driving by a neural network. Feyzabadi and Carpin [2014] propose an efficient risk-aware path planning algorithm using constrained MARKOV decision processes, illustrating their approach with an autonomous indoor navigation problem. Müller and Sukhatme [2014] formalise collision risk by a Gamma distribution of the state and uncertain distance to the nearest obstacle.
For discrete planning and navigation, Shalev-Shwartz et al. [2018] define a parametric model for the investigation of collision-free autonomous driving. Their model implements driving rules according to the Duty of Care approach from tort law, assuming proper response of all relevant traffic participants in typical driving scenarios. For efficient planning, the action space of the vehicle is discretised to solve the corresponding optimisation problem. The authors also determine the probability of sensing mistakes after applying a triple redundancy pattern to the sensor sub-system. Chen et al. [2018] discuss the prediction of rear-end collision risk. Based on the current vehicle state and the sensed environment, a KALMAN filter predicts the next state of the vehicle and its environment at the end of a monitoring interval. The predicted state represents evidence in the BAYESian network for the estimation of the collision probability. Low risk is translated into a warning, high risk into a control intervention which, however, is not the focus of their work.
McCausland et al. [2013] investigate risk-driven self-organisation of a communicating collective of mobile robots acting as environment sensor nodes. Their risk model comprises three factors and represents a fixed node-local risk metric which is continuously evaluated.
In all these works, risk represents either a (chance) constraint not to be violated or a minimisation criterion for determining an optimal plan. Stochastic models allow the per-state estimation of expected risk. The notion of a severity interval in the work at hand would add another possibly helpful uncertainty factor to such approaches and allow the per-state estimation of an expected risk range. Complementary to these control-theoretic approaches, the work at hand provides an algebra for constructing risk models (i.e., state space, value function, action space), hence allowing reasoning about the composition of models of a variety of risks beyond collision and relating these models via refinement. This account addresses the systematic construction and verification of risk models. The mentioned control approaches suggest implementations of controllers from validated risk models.

Contributions and Overview
This work contributes to the state of the art of formal engineering methodology for highly automated safety-critical systems, particularly for the engineering of risk-aware robots and autonomous systems, in several regards:
• Humans can qualitatively identify or predict dangerous situations and take actions to avoid or recover from such situations. Based on that idea, this work approaches qualitative risk modelling (Section 3), yet making provisions to refine the presented model into quantitative risk models (Section 1.2).
• It provides a conceptual framework for risk modelling, formalises this framework, and proves algebraic properties of the proposed concepts (Sections 3 to 6), hence going beyond the works summarised in Section 1.2. The framework further develops the ideas in [Gleirscher, 2017, 2018b].
• Bridging the gap to works in Section 1.2, it investigates how safety of machines such as autonomous robots can be evaluated at design time and at run-time based on the proposed framework (Section 4.4).
• It establishes a relationship to other techniques conventionally and widely used in risk, failure, and accident analysis (Sections 1.2 and 5).
• It discusses several examples to motivate the choice of these concepts (Examples 1 to 9) and explains the specifics and pitfalls of the abstraction to be made by users of the framework (Remarks 4 and 5).
• Complementary to the works in Section 1.2, it characterises risk awareness as the internalisation of expressive and structured risk models by robot controllers to evaluate, predict, and memorise risk by both introspection and environment sensing (Section 6).
• It describes how risk models represent acceptance specifications (i.e., the refinement of the risk model by the process, after hiding irrelevant events, expresses risk acceptance) and run-time monitors that continuously decide whether and how the process violates safety or achieves co-safety (Section 7).
The remainder of this article is structured as follows: After an introduction to the field (Section 2), including the formal preliminaries (Section 2.3), Sections 3 to 6 present the contribution, followed by a brief discussion in Section 7.
The article concludes with Section 8. Proof details are listed in Appendix B.

Background
This section provides some background on risk modelling and assessment, failure analysis, and run-time monitoring.
It also draws a relationship to the application domain of robots and autonomous systems.

Notions of Risk
Risk can be characterised as the possibility of undesired outcomes (e.g. hazardous events and states) of an action with several alternative but uncertain outcomes [Kaplan and Garrick, 1981]. In systems engineering, risk is usually assessed by measuring
• the probability of hazards,
• the severity of their consequences in case of their occurrence,
• the probability of occurrence of these consequences (e.g. an accident) after hazards have occurred, and
• the exposure of the system to these hazards [Leveson, 1995].
Variations and simplifications of these measures are in use across different disciplines, application domains, and corresponding standards. For the estimation of these measures, it is necessary to understand causal relationships between events. Reasoning about causal propositions has been formalised in philosophy, mathematics, and computer science [Lewis, 1973]. Probabilistic risk assessment (PRA) focuses on the use of stochastic models to quantify such propositions [Kumamoto, 2007]. Apart from probabilistic models, a variety of uncertainty models and analysis techniques are used in engineering, for example, as summarised by Oberguggenberger [2015] for structural safety, the reduction of the risk of dangerous incidents in civil engineering projects.
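A common simplification of these measures is a qualitative risk matrix combining probability and severity classes. The following sketch is illustrative only; the class names and matrix entries are assumptions, not prescribed by any particular standard:

```python
# Hypothetical qualitative risk matrix: the risk level of a hazard is read
# off from its probability class and the severity class of its consequences.
PROB = ["rare", "occasional", "frequent"]
SEV = ["negligible", "marginal", "critical"]

def risk_level(prob: str, sev: str) -> str:
    """Map a (probability, severity) pair onto a coarse risk level."""
    score = PROB.index(prob) + SEV.index(sev)  # 0..4 on the matrix diagonal
    return ["low", "low", "medium", "high", "high"][score]
```

Such a matrix is one way to pre-fill the tables of Examples 1 and 2 at design time.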

The Risk of Undesired Events
System Accidents, Dangerous Incidents and Failures. Unfortunately, knowledge about risk mostly results from system accidents. To learn from such events, accident researchers use methods such as AcciMaps [Svedung and Rasmussen, 2002]. For prevention and in response to accidents, methods such as hazard operability studies (HazOp), event tree analysis (ETA), and layer of protection analysis (LOPA) allow the systematic identification and investigation of dangerous events and the derivation and assessment of interventions. Fault tree analysis (FTA) and failure mode effects analysis (FMEA) are widely used in structured analysis, assessment, and reduction of system or process failures. FTA follows a deductive scheme of causal reasoning, FMEA an inductive one.
There are many variations, extensions, and combinations of these techniques, integrated into assurance approaches [McDermid, 1994], enriched with system models (e.g. the use of UML in HazOp [Guiochet et al., 2010]), coming along with intuitive visual languages, and tailored for specific applications or stages in the system life cycle [Ericson, 2015]. All these techniques serve the analysis of undesired events, their causes and consequences, with the aim of reducing risk.
• Reliability techniques help reduce the risk of failures (i.e., reduce fault-proneness and increase fault-tolerance) of mechanical and mechatronic systems [Birolini, 2017].
• Safety techniques focus on reducing the risk of dangerous failures of control systems and software [Leveson, 1995].
The two directions differ but usually overlap, and both are embedded into the context of dependability engineering.
Analysis and Reduction of Random Failures. The body of scientific literature on failure analysis is overwhelming.
The following overview of techniques focuses on a small fraction of model-based and formal techniques for failure analysis at design time. Before summarising several techniques, we will take a brief look at the formal concepts of FTA because it is one of the most widely practised versatile techniques with many powerful extensions [Ericson, 2015].
A fault tree is a causal model of a system relating an undesired top-level event e_TL with a set B of basic events using various kinds of gates (e.g. AND, OR, NOT) connecting these events. A minimum cut set (MCS) is a minimum set of events required to occur to activate e_TL. Dynamic fault trees are an important extension of fault trees, allowing to model the order of events and, thus, leading to minimum cut sequences. A minimum cut sequence describes a minimum set of events that have to occur in this sequence to activate e_TL. Fault trees can also be expressed by connecting specific sets MCSets ⊆ 2^B in disjunctive normal form in the antecedent, with e_TL in the consequent, of the following implication:

⋁_{S ∈ MCSets} ⋀_{e_B ∈ S} e_B ⇒ e_TL

where MCSets denotes all MCSs and e_B stands for a basic event.
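The disjunctive-normal-form reading above can be executed directly: the top-level event is activated iff all basic events of some minimum cut set have occurred. In this sketch the basic event names and the two cut sets are illustrative assumptions:

```python
# Hypothetical set of minimum cut sets of a fault tree over basic events.
MCSETS = [{"pump_fails", "valve_stuck"}, {"power_loss"}]

def top_level_active(occurred: set) -> bool:
    """Evaluate the DNF over MCSets: some cut set S with all of S occurred."""
    return any(s <= occurred for s in MCSETS)
```

For instance, `{"power_loss"}` alone activates the top-level event, whereas `{"pump_fails"}` alone does not.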
FTA and FMEA are regularly used in combination and have been given formal semantics, also based on probabilistic models [Kumamoto, 2007]. For quantitative analysis, many approaches use MARKOV models [Dehlinger and Dugan, 2008] or, at a higher level of abstraction, variants of stochastic PETRI nets [Papadopoulos et al., 2011]. For dynamic fault trees, Kabir et al. [2018] show how fuzzy numbers can be used to integrate qualitative expert opinions on unknown failure rates into a generalised stochastic PETRI net for quantitative reliability evaluation.
Complex dynamic fault trees can also be converted into probabilistic automata for the checking of stochastic temporal properties [Boudali et al., 2007] and the efficient synthesis of failure rates [Volk et al., 2016]. Given a model of the system under consideration, fault trees can be synthesised from failure annotations [Unanue et al., 2018] or from counterexamples generated by model checkers [Leitner-Fischer and Leue, 2013].
Bow tie analysis (BTA) combines FTA and FMEA or ETA. Denney et al. [2017] developed a tool for modelling and managing many large, handcrafted, and interrelated bow tie diagrams and for quantitative assessment. The authors explain how bow ties can guide the construction of assurance cases. Using a hierarchical control model of the process, system-theoretic process analysis (STPA) [Leveson, 2012] aims at holistic safety assessment. STPA shares with BTA and HazOp its aim to bridge the gap between failure and accident analysis. STPA's conceptual framework (called STAMP) [Leveson, 2004] promotes the light-weight and abstract modelling of control hierarchies. Variants of STPA have been proposed specifically for the analysis of accidents (CAST) [Stringfellow, 2010]. Finally, why-because analysis (WBA) [Ladkin and Loer, 2001, Ch. 20] provides a conceptual, formal, and graphical framework for cross-technology and cross-disciplinary root cause identification. Like the majority of the aforementioned techniques, WBA is primarily deductive.
Depending on the possibilities, failures can be reduced in multiple ways. Accordingly, a variety of paradigms is available. Irreducible undesired events (e.g. equipment failures, maloperation), once identified and assessed, can be drastically reduced by fault-tolerant design (e.g. [Littlewood and Rushby, 2011]) or active or passive measures (e.g. safety functions, physical separation). For example, Michalos et al. [2015] provide an overview of safety measures built into and around robots collaborating with humans in manufacturing automation.
Analysis and Reduction of Systematic Failures. As opposed to random failures, systematic failures suggest further means of reduction or even avoidance, applicable much earlier in the engineering process.
Following quality management paradigms in manufacturing, engineering can also be viewed as a stochastic process [Littlewood, 1991]. Stochastic variables are formed by the circumstances (e.g. location, time, technique) under which faults are introduced and detected. Such faults are typically systematic (also called development failures [Avizienis et al., 2004]) because they have an impact on the specification and design, leading to operational failures that can usually be fully reproduced, as opposed to random failures.
To reduce early root causes, particularly of systematic failures, formal methods have been proposed. The main argument for the use of such mathematically founded techniques is their power in detecting errors and inconsistencies in requirements, algorithms, and designs (e.g. invariant violation, deadlock, starvation [Roscoe, 2010, Schneider, 1999]) early, before the system is built or put into operation. In the context of combining different paradigms, fault trees (see page 7) are often used as a feedback tool for incremental design improvement. For example, Hansen et al. [1998] discuss how formal fault trees can be used to derive safety requirements to validate the software design during its step-wise development. Because the introduction of formal methods is difficult and proposed methods proved to be inadequate, Bowen and Stavridou [1993] investigated the applicability of formal methods in safety-critical software engineering. Most importantly, the authors pointed out a typical problem of formal methods: the difficulty of safety provisions and measurements when concerned with a degree of safety, or safety as risk, instead of safety as an invariant not to violate.
Analysis and Mitigation at Run-time. One striking advantage of using model-based techniques, particularly formal methods, is the possibility of using property specifications and models at run-time. For example, in run-time verification or monitoring [Leucker and Schallhart, 2009], properties are checked during system operation by recording observation traces (e.g. values of observed variables) and checking these traces for violations (safety) or for acceptance (co-safety). The checking task is performed by independent components, sometimes called watchdogs, safety monitors, or policing functions [Bogdiukiewicz et al., 2017]. Monitoring can be used to derive probabilistic statements about system health at any time during operation, for example, using BAYESian networks [Iamsumang et al., 2018].
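The safety/co-safety distinction for finite traces can be sketched as a three-valued monitor: a safety property is irremediably violated once a bad event occurs, a co-safety property is accepted once a good prefix has been seen. Event names here are illustrative assumptions:

```python
# Minimal sketch of a trace monitor: scan a finite observation trace and
# report a verdict as soon as one is determined.
def monitor(trace,
            bad=frozenset({"collision"}),
            goal=frozenset({"goal_reached"})):
    """Return 'violated', 'accepted', or 'inconclusive' for a finite trace."""
    for event in trace:
        if event in bad:
            return "violated"    # safety: a bad prefix cannot be repaired
        if event in goal:
            return "accepted"    # co-safety: a good prefix suffices
    return "inconclusive"        # the finite trace determines no verdict
```

A watchdog component would run such a check incrementally, event by event, rather than over a recorded trace.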

Formal Preliminaries
This section introduces the preliminaries for the formalisation in the later sections.
Processes. We use communicating sequential processes (CSP) [Hoare, 1985] for system modelling. Let Σ be the set of all (labels for) events, called the alphabet of a process. We distinguish two special events: ✓ signifies the successful termination of a process and τ signifies the invisible event resulting from abstraction (i.e., hiding or constrained observation). We require ✓, τ ∉ Σ and define

Definition 1 (Process) A process is an expression of the following form:

P ::= STOP | SKIP | a → P | P ; P | P □ P | P ⊓ P | P |[A]| P | P \ A

where a ∈ Σ and A ⊆ Σ. Let P be the set of all (labels for) processes (or, equivalently, their control states) with SKIP, STOP ∈ P.
We attach meaning to each expression P ∈ P according to Definition 1 by providing a recursive scheme denoting the behaviour of P. Among several ways of doing this, we focus on the traces model T. Traces are finite sequences over Σ abstracting from details (e.g. internal state) of P's executions or behaviours [Schneider, 1999]: "a trace is a record of the visible events of an execution." In T, we use a function traces(P) to obtain the trace semantics of P. For example, we have traces(STOP) = {⟨⟩}. The event ✓ represents the counterpart of SKIP in T, as denoted by traces(SKIP) = {⟨⟩, ⟨✓⟩}. initials(P) = {a | ⟨a⟩ ∈ traces(P)} denotes the set of all initial events of P. If we can establish the relation traces(Q) ⊆ traces(P) for two processes P and Q, we write P ⊑_T Q and say that Q refines P (or P is refined by Q).
With Definition 1, we can model the observable behaviour of systems and refer to distinct portions of such behaviour using control-state labels. For a comprehensive account of CSP and a hierarchy of CSP models, the inclined reader may consult [Roscoe, 2010].
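The traces model can be made concrete for a tiny fragment of CSP. The following sketch is our own encoding (tuples of event labels, "✓" for termination), covering only STOP, SKIP, and prefixing, with trace refinement as set inclusion in the direction used above.

```python
# Traces model for a small CSP fragment; traces are tuples of events.
TICK = "✓"

def traces_stop():
    return {()}                      # traces(STOP) = {⟨⟩}

def traces_skip():
    return {(), (TICK,)}             # traces(SKIP) = {⟨⟩, ⟨✓⟩}

def traces_prefix(a, t_p):
    # traces(a → P) = {⟨⟩} ∪ {⟨a⟩ followed by s | s ∈ traces(P)}
    return {()} | {(a,) + s for s in t_p}

def initials(t_p):
    # initial events: first elements of non-empty traces
    return {s[0] for s in t_p if s}

def refines(t_p, t_q):
    """P ⊑_T Q (Q refines P) iff traces(Q) ⊆ traces(P)."""
    return t_q <= t_p

t_ab = traces_prefix("a", traces_prefix("b", traces_stop()))
assert t_ab == {(), ("a",), ("a", "b")}
assert initials(t_ab) == {"a"}
assert refines(t_ab, traces_stop())   # STOP refines a → b → STOP
```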
Transition Systems. For the investigation of the operational semantics of both risk models and CSP, labelled transition systems (LTS, Baier and Katoen [2008]) are defined along with notational conventions.
Definition 2 (Labelled Transition System, LTS) An LTS is a tuple S = (S, E, →, S_0) with a set S of states, a set E of events, a relation → ⊆ S × E × S representing transitions between these states when engaging in these events, and a set S_0 of initial states.
runs(s, S) denotes the runs (i.e., state/event traces) observable from the process modelled by S in state s. We write s –e→ s′ if (s, e, s′) ∈ → and s → if ∃ e ∈ E, s′ ∈ S: (s, e, s′) ∈ →. The omission of event labels and initial states leads to the more general form (S, →) with → ⊆ S × S, called a transition system (TS).
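Definition 2 and the runs notation can be sketched as a small data structure. This is an illustrative encoding of ours: runs are enumerated only up to a bounded depth, since an LTS may contain cycles.

```python
# Sketch of an LTS (S, E, →, S_0) with bounded-depth run enumeration.
class LTS:
    def __init__(self, states, events, trans, initial):
        self.states, self.events = set(states), set(events)
        self.trans = set(trans)          # subset of S × E × S
        self.initial = set(initial)

    def enabled(self, s):
        """s → holds iff some (s, e, s') ∈ →."""
        return any(src == s for (src, _, _) in self.trans)

    def runs(self, s, depth):
        """State/event traces observable from state s, up to `depth` steps."""
        if depth == 0:
            return {(s,)}
        out = {(s,)}
        for (src, e, dst) in self.trans:
            if src == s:
                out |= {(s, e) + r for r in self.runs(dst, depth - 1)}
        return out

lts = LTS({"s0", "s1"}, {"a"}, {("s0", "a", "s1")}, {"s0"})
assert lts.enabled("s0") and not lts.enabled("s1")
assert ("s0", "a", "s1") in lts.runs("s0", 1)
```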
Temporal Logic. For the investigation of constraints on risk models, we employ linear-time temporal logic [Manna and Pnueli, 1991, TL]. For TL formulae φ, ψ, a run ρ ∈ runs(s, S), and the truth constants T and F,
• the operator ○φ expresses that φ holds of the next state of ρ, and
• φ U ψ expresses that φ holds of every state until ψ holds of a state, with ψ required to eventually hold.
• For convenience, □φ = φ W F denotes that φ holds of every state of a whole run,
• ◇φ = T U φ expresses that φ holds eventually, that is, for some future state, and
• φ W ψ = (φ U ψ) ∨ □φ allows ψ to never actually hold.
• For a last state of any run prefix, ◆φ denotes that φ has held before, that is, in the past represented by this prefix.
TL formulas can be interpreted for events in the same way. In timed extensions of TL [Koymans, 1990], we allow time constraints of the form "∼t" to be attached to some of these operators, with ∼ ∈ {<, >, ≤, ≥}. For example, ◇_{<t} φ expresses that φ holds for some future state before t time units will have elapsed. Events in runs can be treated in a similar way, and the satisfaction relation can be extended to sets of runs of S. A comprehensive treatment is provided in [Baier and Katoen, 2008].
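The three main operators can be evaluated over finite runs. The following sketch uses our own closed-world simplification (the finite run is taken as the whole behaviour), which suffices to illustrate the definitions; it is not a full LTL semantics.

```python
# Evaluating □, ◇, and U over a finite run (a list of states).
def always(phi, run):          # □φ: φ holds of every state of the run
    return all(phi(s) for s in run)

def eventually(phi, run):      # ◇φ = T U φ: φ holds of some state
    return any(phi(s) for s in run)

def until(phi, psi, run):      # φ U ψ: φ holds until ψ holds; ψ must occur
    for s in run:
        if psi(s):
            return True
        if not phi(s):
            return False
    return False               # ψ never held

run = ["0f", "0f", "f", "fbar"]
active = lambda s: s == "f"
assert eventually(active, run)
assert not always(active, run)
assert until(lambda s: s != "fbar", active, run)
```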

Risk Elements
This section introduces a model to capture beliefs about risk and risk causality based on a process P (Definition 1). It is not unusual to consider one part S of P as the "system under consideration" interacting with another part E of P called the "environment" or "context". We may consider two settings: P = S ‖_A E with a set A of shared events, or a setting where E is a term using S.
Based on the discussion in Section 2.1, we view risk as the possibility4 of undesired states that are (i) reachable by P, (ii) finitely causal5, and (iii) not entirely avoidable in P. Undesired states both result from undesired events and may cause, or at least increase the risk of, further undesired events, hence forming causal chains of events. As highlighted in Section 1, in our risk models we concentrate on undesired fractions of uncertain outcomes of process actions, that is, observable events changing the level of risk from process state to process state. Let us consider further examples before we formalise these concepts.
Example 3 (Road Traffic and Brakes) Collisions are examples of undesired events; the resulting accidents are undesired states entailing human injury and damaged property. Even near-collisions are undesired events, by definition posing the risk of actual collisions. By backward reasoning, for example, an observable loss of a car's braking function constitutes an undesired event leading to an undesired state of the car, because the car is operating without functioning brakes. Clearly, such a vehicle is in a riskier state than one with functioning brakes.
Example 3 highlights why the state bi-partition in Figure 1 is too coarse from the viewpoint of a single risk factor. Failures of a brake or an anti-lock braking system are not repairable at run-time; in other words, direct mitigation is not possible. Hence, Figure 2 introduces a third partition as described below. Moreover, a failure of the brakes restricts the direct mitigation of near-collisions (Example 2) by braking. This example also shows how two risk factors can be related.
Example 4 (Autonomous Vehicles and Brakes) There is a difference between human-operated cars, where human operators might be aware of missing brakes, and autonomous vehicles (AVs), where being aware is left to the automation. In the former type of car, alert human operators will try to react. Independent of human capabilities, what matters is that they become aware and are given the chance to take responsibility for emergency control. AVs, however, are left alone in such a situation, and their vendors are unlikely to manage to legitimately push away such responsibility. We might expect AVs to implement at least as much responsibility as society would expect from qualified human drivers in the corresponding driving situations.
Example 4 motivates safety engineers to equip autonomous machines with the ability to run highly specific mitigation mechanisms in risky states, to develop beliefs about past operations, to predict risk in future operations of a machine, and to certify that such mechanisms actually improve safety.

Risk Factors
Risk factors form the basic elements of the approach to risk modelling, as discussed in the following. We define the notion of a risk factor using an LTS, describe its properties and meaning, provide a translation into CSP, and discuss an algebra of risk factors.
Definition 3 (Risk Factor) Let (Ph, Σ_f, →) be an LTS according to Definition 2. Extending this LTS, a risk factor is a tuple comprising
• a partial order ≤_f ⊆ Ph², and
• an interval c for the severity of the least and worst expected consequences of f.6
We call c the severity (interval) of f. Let F be the set of all risk factors in the remainder of this work.
We only consider risk factors with finite Ph and Σ_f. Furthermore, we regard risk factors f with → according to Figure 2, with Ph = {0_f, f, f̄} for the phases inactive (0_f, typically the initial and desired phase of f), active (f), and mitigated (f̄), where ≤_f is at least7 the reflexive transitive closure of {(f, 0_f), (f, f̄)}. The labels in Figure 2 indicate the meanings of these events. Figure 2b describes how these events can be used for modelling risk factors. The three state-preserving events can complement the endangerment and mitigation events. By definition, if e = ✓ then (p, e, p′) ∉ → for any p, p′ ∈ Ph. We can now distinguish further kinds of risk factors:
• For any reducible f with m_f, if e_f̄ ⊂ e_f then we call f strongly reducible.
• If m_f^d = ∅ then we call f indirectly reducible.
The type of risk factor described in Figure 2a is minimal inasmuch as it comprises the minimal set of elements of generic risk atoms. However, phases other than the ones shown can be distinguished in specific applications.
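The minimal risk factor of Figure 2 can be sketched as a small phase/event machine. The encoding below is ours: phases `0f`, `f`, `fbar`, with endangerment (`e_f`), mitigation (`m_f`), direct mitigation (`m_f_d`), and recovery (`m_f_r`) events; state-preserving events leave the phase unchanged.

```python
# Sketch of a risk factor's phase machine (inactive / active / mitigated).
TRANSITIONS = {
    ("0f", "e_f"): "f",        # endangerment activates the factor
    ("f", "m_f"): "fbar",      # mitigation reaches the mitigated phase
    ("f", "m_f_d"): "0f",      # direct mitigation deactivates immediately
    ("fbar", "m_f_r"): "0f",   # recovery returns to the inactive phase
}

def step(phase, event):
    # Events without a transition from `phase` are treated as state-preserving.
    return TRANSITIONS.get((phase, event), phase)

phase = "0f"
for e in ["e_f", "m_f", "m_f_r"]:   # activate, mitigate, recover
    phase = step(phase, e)
assert phase == "0f"
assert step("0f", "o_f_n") == "0f"  # state-preserving observation
```

A strongly reducible factor would additionally restrict which endangerment events remain observable from `fbar`; an indirectly reducible factor would omit the `("f", "m_f_d")` entry.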
Abstractions underlying Events. Events in the CSP interpretation are atomic observations [Roscoe, 2010, Ch. 1.5]. The initiation and termination of events representing complex, enduring real-world phenomena are to be viewed as non-separable aspects of these events. Consequently, much care is necessary when making assumptions about the atomicity of such events interfering with other events. Hence, it is reasonable to use the types of events listed for risk factors to model the initiation, termination, or other significant events of the corresponding sub-processes (i.e., endangerment and mitigation processes) in the process P.
Example 5 (Final Risk Factors) Road accidents as well as nuclear power-plant accidents form courses of events. Severe human injury or loss and machine, environmental, and property damage typically happen during such accidents. If required, we can model such injury or damage as final risk factors and, thus, need not discuss possible mitigations any further. This way, final risk factors define the scope of a risk model.
Viewing damage or injury as risk factors allows their treatment within the same framework. We might later introduce mitigations and convert a final into a reducible risk factor. Airbags, as a mitigation of certain types of human injury for certain types of collisions, represent a historical example.
Example 6 (Strongly Reducible Risk Factors) We instantiate the LTS pattern given in Figure 2: Consider the event e_db (an instance of e_f) that a car's braking function degrades, resulting in an operational state db of the car where any further use of its brakes (o_db^e) will likely differ from the expectation of a human operator trying to reduce speed in a typical driving situation. One conservative mitigation (m_f) would be to drive by and halt as safely as possible and, thus, reach a state d̄b from which only a strict subset (i.e., e_d̄b ⊂ e_db) of the original endangerment events can be observed. Hence, we call db a strongly reducible risk factor.
Example 7 (Indirectly Reducible Risk Factors) If our application requires an intermediate stable state f̄ for the mitigation of a risk factor before returning to the inactive phase 0_f, we speak of indirectly reducible risk factors.
A leaking or damaged battery of an AV would be such a case, as would an aircraft running out of fuel. If we do not want to consider ways to refuel such machines during operation, but rather to reach what is typically called a "safe state," we might model such situations by indirectly reducible risk factors. The mitigated phase would represent the "safe state" with respect to this risk factor. This way, our model captures how to reach this phase. In our examples, this can happen by the atomic events of successfully halting at the next car repair shop and of successfully accomplishing an emergency landing, respectively. From this phase, we can recover (m_f^r) to the inactive phase 0_f.

Example 8 (Directly Reducible Risk Factors) Driving too close to a front vehicle is a risk factor that, in many situations, can be dealt with by braking correspondingly, thus resulting in a state where this risk factor is inactive again. The described braking event can be counted as an event in m_f^d. We call this a direct mitigation.
Risk Factors in CSP. The risk factor f from Figure 2 can be represented as a sequential, mutually recursive CSP process R_f (Definition 1), one equation per phase. If the event sets guarding the choices are pairwise disjoint, the general choice becomes an external choice and R_f is deterministic; otherwise, R_f is nondeterministic. This mapping enables an algebraic treatment of risk factors in CSP. Later, in Section 6, we will use such a map for a factor f according to Definition 3, initialised with the phase 0_f. Note the use of f as a symbol for the risk factor as a transition system (Figure 2a) and the use of f for the CSP process that models this risk factor in its active phase. Different fonts signify the semantic difference.
Remark 1 Risk factors can be used to model risky fractions of, or propositions about, a process and its behaviour. For example, final risk factors can be used to model permanent and off-line repairable faults, and reducible risk factors serve the modelling of, for example, transient and on-line repairable faults. This model only allows us to talk about risks identified as risk factors and cannot be used to reason about "absolute safety." This epistemic limit is inherent to (risk) modelling and can only be dealt with from outside the framework.
Remark 2 The notions of systematic and random faults [Birolini, 2017] can be represented as follows: A systematic fault can be seen as an observable undesired event whose preconditions or causes (i.e., 0_f, e_f) can be predicted, reconstructed, reproduced, or otherwise deduced, identified, and sufficiently determined. Hence, each (class of) systematic fault(s) can be associated with a deterministic risk factor.
A random fault can be seen as an observable undesired event whose preconditions or causes are only partially known or even unknown. One way to represent this lack of knowledge by a risk factor is to use nondeterminism:8 From 0_f, we allow rnd = o_f^n ∩ e_f. rnd represents potential but incomplete causes of f. Note that this choice makes sense inasmuch as o_f^n ⊇ e_f denotes that we know f but the least possible, namely nothing, about its causes. Having observers9 for events and phases, the risk factor would form a model of a process P where an observation of an event of P in rnd is sometimes followed by an observation of the phase 0_f and sometimes by an observation of the phase f. The factor f in Figure 3 is a generalisation of f in Figure 2a (i.e., f(3) ⊑_T f(2a)). Each observable event of P that belongs to rnd_0 (analogously, rnd for f and rnd for f̄) leads to an anonymous phase (•) succeeded by a τ event representing internal choice. For the mitigation events m_f^d and m_f to be observed from the active phase f, uncertainty is modelled by the set rnd, again followed by either of the three phases of the risk factor, depending on what can be observed in P.

Risk Spaces
Risk factors give rise to further concepts: risk states, risk spaces, mitigation orders, and risk structures. Let n ∈ ℕ and F = {f_i | i ∈ [1..n]} ⊂ F be a finite set of risk factors (Definition 3).

Definition 4 (Risk State) Assume that risk factors are unique, that is, i ≠ j ⇒ f_i ≠ f_j ∧ Ph_i ∩ Ph_j = ∅. Then, a risk state is a faithful total injection σ: F → ⋃_{i∈[1..n]} Ph_i with σ(f_i) ∈ Ph_i.

Observe that from Figure 2 it follows that ∀f, g ∈ F: Σ_f = Σ_g, that is, all risk factors correspond to processes with the same alphabet, call it Σ_F. This has no influence on risk space composition as described below. However, it follows that the corresponding CSP processes are composed in parallel synchronously over Σ_F, such that the process underlying each risk factor is always ready to agree with some event of the environment.10 This construction guarantees deadlock freedom.
A risk state abstracts from states of a process P (Section 2.3) by focusing on risk-related information in the form of state propositions associated with the phases of the risk factors.
Definition 5 (Risk Space) For a set of risk factors F, a risk space R(F) is the function space given by the set of all risk states over F (Definition 4). We omit the parameter F from R if it is clear from the context and denote the set of all risk spaces by R.
Let phase: R × F → ⋃_{f∈F} Ph_f be a map yielding the current phase of a risk factor f in a risk state σ. The infix operator scheme ·|_{F′}: R(F) × 2^F → R(F′) describes a projection from the risk space R(F) to the risk space R(F′), where F′ ⊆ F.
We allow the convention σ^(i) = σ(f_i) = phase(σ, f_i) when referring to the phase of the risk factor f_i. We can view R as a set of ordered n-tuples (formed by the Cartesian product of the Ph_i after fixing some linear order over F) or as a set of sets (formed by all equivalence classes of phase permutations over F). These views permit the treatment of σ ∈ R as an unordered tuple or index set. Particularly, for i ≠ j, the tuples (σ^(i), σ^(j)) ∈ Ph_i × Ph_j and (σ^(j), σ^(i)) ∈ Ph_j × Ph_i identify exactly the same risk state. Consequently, (σ^(i), σ^(i)) = (phase(σ, f_i), phase(σ, f_i)) collapses to σ^(i). In the following, one of the two views of R will occasionally be more convenient for the discussion.
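A risk state as a total map, with phase lookup and projection, can be sketched directly. The dictionary encoding below is our own illustration of Definition 4 and the operators above.

```python
# Sketch of risk states as total maps from factors to phases.
def make_state(assignment):
    return dict(assignment)          # σ: F → Ph, one phase per factor

def phase(sigma, f):
    """phase(σ, f): the current phase of factor f in σ."""
    return sigma[f]

def project(sigma, f_sub):
    """σ|_F′ for F′ ⊆ F: restrict σ to the factors in F′."""
    return {f: p for f, p in sigma.items() if f in f_sub}

sigma = make_state({"f1": "0f1", "f2": "f2_active"})
assert phase(sigma, "f2") == "f2_active"
assert project(sigma, {"f1"}) == {"f1": "0f1"}
# Projection ignores the phases of factors outside F′:
assert project(sigma, {"f1"}) == project({"f1": "0f1", "f2": "f2bar"}, {"f1"})
```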
Remark 3 By Definition 5, R is non-empty and finite if and only if F is non-empty and finite. R defines the set of all states an arbitrary11 combination of phases of risk factors might give rise to. Given a complete set of risk factors identified by the risk analyst for a specific application, only a small subset of R might eventually be relevant for the machine in operation. In general, we assume that identifying this subset is difficult. Moreover, the relevance of a risk state can be seen as a gradual quantity determined at run-time based on its context in R and the process state.
Definition 6 (Equality and Compatibility of Risk States) Two risk states σ, σ′ ∈ R are equal, written σ = σ′, if and only if their corresponding phases are equal, formally, ∀i ∈ [1..n]: σ^(i) = σ′^(i).
Definition 7 (Risk Space Composition) The composition ⊗: R × R → R of two risk spaces R(F_1) and R(F_2) is defined by the union of pairs of compatible risk states, that is, R(F_1) ⊗ R(F_2) = {σ_1 ∪ σ_2 | σ_1 ∈ R(F_1) ∧ σ_2 ∈ R(F_2) ∧ σ_1, σ_2 compatible}. An analogous constraint is used for the parallel composition of risk structures below in Formula (10). Now, we can derive a basic law relating the union of risk factors and the composition of risk spaces. Furthermore, it will turn out that R is a homomorphism.
Lemma 1 (Exchange of ∪ and ⊗) For finite F_1, F_2 ⊂ F: R(F_1 ∪ F_2) = R(F_1) ⊗ R(F_2).

Proof 1 (Proof Sketch.) The proof is by mutual existence and uniqueness: (i) every risk state of R(F_1 ∪ F_2) decomposes into a pair of risk states of R(F_1) and R(F_2), (ii) this pair is unique, and (iii, iv) vice versa. Details on the proof can be taken from Proof 20.
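For small, disjoint factor sets, the exchange law can be checked by enumeration. The following sketch uses our own encoding: a risk space is the set of all phase combinations, and ⊗ unions disjoint risk states (compatibility is trivial here because the factor sets are disjoint).

```python
# Numerical sketch of R(F1 ∪ F2) = R(F1) ⊗ R(F2) for disjoint factor sets.
from itertools import product

def risk_space(phases_by_factor):
    """R(F): all total maps from factors to their phases."""
    factors = sorted(phases_by_factor)
    return {frozenset(zip(factors, combo))
            for combo in product(*(phases_by_factor[f] for f in factors))}

def compose(r1, r2):
    """R(F1) ⊗ R(F2): union of (here trivially compatible) risk states."""
    return {s1 | s2 for s1 in r1 for s2 in r2}

F1 = {"f1": ["0f1", "f1a", "f1m"]}     # three phases
F2 = {"f2": ["0f2", "f2a"]}            # two phases
lhs = risk_space({**F1, **F2})         # R(F1 ∪ F2): 3 × 2 = 6 states
rhs = compose(risk_space(F1), risk_space(F2))
assert lhs == rhs and len(lhs) == 6
```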
Lemma 2 (Homomorphism) R is a homomorphism in the context of (F, ∪) and (R, ⊗).
Proof 2 (Proof Sketch.) We first make sure that we actually deal with semi-groups and then show by algebraic manipulation that ⊗ is associative. Details on the proof can be taken from Proof 21.
Special Risk States. We close this section with an analysis of specific classes of risk states. R(∅) forms the empty risk space and R({f}) the trivial risk space for f, leading to the following law:

Corollary 1 For any finite F ⊂ F, R(∅) is the zero element of risk space composition with ⊗: R(F) ⊗ R(∅) = R(∅) ⊗ R(F) = R(∅).

This notion is different from control stability, where the plant has reached a stable state, and from CSP's stable failures model F, where stability refers to control states without invisible internal events (i.e., τ) and waiting for input. States with an active final risk factor f (Section 3.1) play a particularly bad role: Such states, by definition, would expose the process P to residual risk infinitely long and often, thus making any harmful consequences associated with f very likely. On the one hand, such states, useful in modelling bad accidents, are inevitable in any realistic risk model; on the other hand, a process P should govern its (probabilistic) choices in order not to enter such states.

Mitigation Orders
This section investigates various basic orders over risk spaces depending on the qualitative and quantitative information available in the risk model.

Qualitative Mitigation Orders
Let R(F) be a risk space for a set of n risk factors F ⊆ F according to Definition 5. Again, we assume all risk factors are given indices in the range 1..n. We use the convention of page 13 to refer to parts of risk states by σ^(i) with i ∈ [1..n] and define a partial order ≼_m ⊆ R × R as follows.
Definition 9 (Fully Comparable Inclusive Mitigation Order) For any states σ, σ′ ∈ R, define

σ ≼_m σ′ ⟺ ∀i ∈ [1..n]: σ^(i) ≤_{f_i} σ′^(i).

By σ ≺_m σ′ ⟺ σ ≼_m σ′ ∧ σ′ ⋠_m σ, we induce the corresponding strict order. σ and σ′ are said to be incomparable if and only if σ ⋠_m σ′ ∧ σ′ ⋠_m σ. Intuitively, σ ≼_m σ′ signifies that "σ′ is a better achievement in risk mitigation than σ." However, note that ≼_m requires full comparability of two states. It might be cumbersome to require such comprehensive knowledge to determine which state is "better or less risky" than another state. Hence, in the presence of irreducible (i.e., aleatory) uncertainty, we might instead want to account for the partial knowledge in the orders of risk factors' phases (Definition 3) at the level of R by providing a relaxed partial order as follows.
Definition 10 (Partially Comparable Inclusive Mitigation Order) For states σ, σ′ ∈ R, define

σ ≾_m σ′ ⟺ ∀i ∈ [1..n]: σ^(i) and σ′^(i) comparable in ≤_{f_i} implies σ^(i) ≤_{f_i} σ′^(i).

We use ≺̃_m and ≃_m to distinguish the corresponding strict order and equality for ≾_m from ≺_m. Intuitively, Definition 10 requires a "betterment in risk from σ to σ′" based exactly on the comparable phases.
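The fully comparable order can be sketched over the phase encoding used earlier. This is our own illustration: the per-factor order ranks the active phase below the inactive and mitigated phases, and two states are compared factor-wise.

```python
# Sketch of ≼_m: factor-wise comparison of phases, active being riskiest.
PHASE_ORDER = {("f", "0f"), ("f", "fbar")}  # active ≤ inactive, active ≤ mitigated

def phase_leq(p, q):
    """Reflexive closure of the per-factor phase order."""
    return p == q or (p, q) in PHASE_ORDER

def mitig_leq(sigma, sigma2):
    """σ ≼_m σ′: every factor's phase is at least as good in σ′."""
    return all(phase_leq(sigma[f], sigma2[f]) for f in sigma)

worse  = {"f1": "f",  "f2": "f"}        # both factors active
better = {"f1": "0f", "f2": "fbar"}     # both deactivated/mitigated
assert mitig_leq(worse, better)
assert not mitig_leq(better, worse)

# Incomparable pair: each state is better on one factor, worse on the other.
a = {"f1": "0f", "f2": "f"}
b = {"f1": "f",  "f2": "0f"}
assert not mitig_leq(a, b) and not mitig_leq(b, a)
```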

Quantitative Mitigation Orders
So far, we have seen how partial orders account for a lack of knowledge and potential uncertainties about risk.Now, we will have a look at how we can use severity information, if available for specific risk factors, to interpolate knowledge gaps, model uncertainty, and derive a linear order over R.
We continue with further definitions to deal with intervals. Let active: R(F) → 2^F with active(σ) = {f ∈ F | σ(f) = f} be the map returning the set of active factors of a risk state. Moreover, let S: R → ℝ²₊ with S(σ) = (s_f)*_{f∈active(σ)} be a map for the construction of the severity interval of a risk state,12 where (s_f)* denotes the convex hull of the family of severity intervals of σ's active factors. Whereas the minimum severity of a risk factor f is given by the interval [0, 0), the minimum severity of a risk state S(σ) is the empty set ∅. Moreover, we say that two risk states σ, σ′ ∈ R are severity-equivalent if and only if their accumulated severity intervals are equal, that is, σ ∼_s σ′ ⟺ S(σ) = S(σ′). We have that σ = σ′ ⇒ σ ∼_s σ′ because the factors that are in their active phases are identical. The relation ∼_s is an equivalence relation because it is reflexive, symmetric, and transitive (all by the usual equivalence over intervals). Furthermore, ∼_s induces equivalence classes [σ]_{∼_s}. Given the family (s_i)_{i∈[1..n]}, we now define an order over R/∼_s.

Definition 11 (Strong Mitigation Order) For risk states σ, σ′ ∈ R, define

σ ≤_m σ′ ⟺ S(σ′) ≤ S(σ) ∨ S(σ′) ⊆ S(σ).  (3)

≤_m codifies the circumstance that the risk state σ′ is "better" if the union of its severity intervals is (a) lower in the ranking ≤ of interval numbers or (b) narrower than the corresponding union for σ.
Condition (b) conveys the intuition that the interval carries less uncertainty about the consequences expected from σ′ than from σ. Equivalence classes in R/∼_s abstract from the risk factors from which the merged severity intervals originate. This abstraction has to be carefully taken into account when using ≤_m and, therefore, when specifying severity. Note that ≤_m is based on the convex hull of the severity intervals from the active phases of a pair of risk states. Apart from the convex hull, interval addition and multiplication are relevant for alternative mitigation orders, as we shall see below. However, a detailed investigation is left for future work. Let us now consider some core properties of ≤_m.
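The severity map S and a comparison in the spirit of ≤_m can be sketched as follows. This is an illustrative encoding of ours: S(σ) is the convex hull of the intervals of σ's active factors, and the interval ranking (compare lower bounds, then upper bounds) is our own choice, not the paper's definition of ≤.

```python
# Sketch of S(σ) and a strong-mitigation-style comparison.
SEVERITY = {"f1": (2, 5), "f2": (4, 9)}   # per-factor severity intervals (lo, hi)

def S(sigma):
    """Convex hull of intervals of active factors; None encodes the empty set."""
    ivals = [SEVERITY[f] for f, p in sigma.items() if p == "active"]
    if not ivals:
        return None
    return (min(lo for lo, _ in ivals), max(hi for _, hi in ivals))

def strong_leq(sigma, sigma2):
    """σ ≤_m σ′: σ′ carries lower-ranked or narrower accumulated severity."""
    s, s2 = S(sigma), S(sigma2)
    if s2 is None:                         # nothing active in σ′: best case
        return True
    if s is None:
        return False
    lower = (s2[0], s2[1]) <= (s[0], s[1])             # (a) ranked lower
    narrower = s[0] <= s2[0] and s2[1] <= s[1]         # (b) contained in S(σ)
    return lower or narrower

both = {"f1": "active", "f2": "active"}
one  = {"f1": "active", "f2": "inactive"}
assert S(both) == (2, 9) and S(one) == (2, 5)
assert strong_leq(both, one)    # deactivating f2 lowers and narrows the hull
assert strong_leq(both, {"f1": "inactive", "f2": "inactive"})
```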
Lemma 4 ≤_m is linear over R/∼_s.
Proof 6 (Proof Sketch.) We show by case analysis that any two risk states are comparable and that ≤_m is antisymmetric. The complete proof is stated in Appendix B.
Corollary 4 After dropping antisymmetry, we have by Formula (3) that ≤_m is also linear over R.
Remark 4 (Method of Abstraction) Severity intervals abstract from the potential consequences of risk factors. The family (s_i)_{i∈[1..n]} forms a cut of causal chains and, hence, defines the scope of the risk model. This abstraction is left to the modeller (i.e., the risk analyst or safety engineer) and can vary significantly. Note that two different risk states σ, σ′ ∈ R (i.e., σ ≠ σ′) can well be severity-equivalent (i.e., σ ∼_s σ′). Hence, the consequences of several activated risk factors should be compatible in the sense that the convex hull of the intervals of these factors maintains a consistent meaning of severity in (R, ≤_m).
We will return to abstraction and compatibility of risk factors below in Section 5 and instead continue here with the further investigation of ≤_m.
Proof 7 Lemma 5 follows from the finiteness of R (by definition) and the linearity of ≤_m (by Lemma 4).
Definition 12 For σ ∈ R(F), we also write σ = 0_F for ∀f ∈ F: σ(f) = 0_f and σ = F for ∀f ∈ F: σ(f) = f. We denote by ⊤_F the set of maximal elements and by ⊥_F the set of minimal elements of (R, ≼_m, ≾_m, ≤_m). We characterise the minimal elements in (R(F), ≼_m) by ⊥_F = {σ ∈ R | ¬∃σ′ ∈ R: σ′ ≺_m σ}, and analogously for (R, ≾_m) and (R, ≤_m) and for the maximal elements.
Corollary 5 ⊥_F and ⊤_F are non-empty and, therefore, have a proper manifestation. For (R/∼_s, ≤_m), ⊥_F and ⊤_F are singletons.

Corollary 6 In (R/∼_s, ≤_m), every non-empty subset of R/∼_s has a greatest lower bound and a least upper bound.
Proof 8 (Proof of Corollary 5.) The proof is by contradiction. For the sake of brevity, we only consider a sketch of this proof. Assume we have two distinct state classes in ⊥_F. By Lemma 4, state classes are in linear order. Thus, by the definition of ⊥_F, one of these state classes causes a violation of the universal quantification in Definition 12 and, therefore, one of the classes cannot be in ⊥_F, which contradicts our assumption. The argument for ⊤_F is analogous.
Proof 9 (Proof of Corollary 6.) The linearity of ≤_m implies that every non-empty subset of R/∼_s has a greatest lower bound and a least upper bound.

Relating Mitigation Orders
Note that the strong mitigation order characterised by Lemmas 4 and 5 is driven by the number of active risk factors and their severity intervals (because of the definition of S), but not by the equality of phases of risk factors among the compared risk states. This offers the possibility of abstracting from individual risk factors and of focusing on severity estimates. To avoid infeasible models (e.g. specifications that get too strong to be realisable), we require that the accumulation of severity intervals constitutes a relational extension of either ≼_m or ≾_m, formally, for full and partial comparability,

(σ ≼_m σ′′ ∨ σ ≾_m σ′′) ⟹ S(σ′′) ≤ S(σ) ∧ S(σ′′) ⊂ S(σ),  (4)

otherwise implying T. Intuitively, if σ is "worse" than σ′′ then its accumulated severity interval S(σ) has to be greater than that of σ′′ and, therefore, must not be contained in that of σ′′. Moreover, if σ and σ′′ are incomparable in ≼_m and ≾_m (i.e., some risk factors have inversely ordered or incomparable phases), then S(σ) and S(σ′′) are allowed to form any relationship (signified by T for "true"), for example, in Formula (4).
So, what is the (necessary and) sufficient condition on F to satisfy the requirement expressed by Formula (4)? Risk spaces and risk state pairs are the interpretations and, therefore, the potential models satisfying the relational extension imposed by Formula (4). The preparation of an answer to this question suggests the following lemma:

Lemma 6 For any σ, σ′ ∈ R(F), σ ≼_m σ′ ∨ σ ≾_m σ′ implies active(σ′) ⊆ active(σ).

Proof 10 (Proof Sketch.) The proof is by induction over F and relies on the assumption that, for any f ∈ F, f is the unique maximal element in ≤_f and that active only returns such elements. The whole proof is stated in Proof 23.
Again, fix a finite F and a pair σ, σ′ ∈ R(F) and assume σ ≼_m σ′ ∨ σ ≾_m σ′. Then, by Lemma 6, σ′ incorporates a subset of the active risk factors of σ. S(σ′) maintains the right-hand part (i.e., S(σ′) ⊂ S(σ)) of the consequent of Formula (4) in all of the following cases:
1. no intervals are excluded (σ′ = σ),
2. only intervals included in the others are excluded,
3. intervals only increasing the lower bound are excluded,
4. intervals only decreasing the upper bound are excluded, and
5. intervals increasing the lower bound and decreasing the upper bound are excluded.
As a side note, Formula (4) is also satisfied if all factors in F are assigned the same interval c_0. In conclusion, the sufficient condition on F to satisfy Formula (4) is the "unique maximal element" precondition in the proof of Lemma 6. Apart from this precondition, Formula (4) holds of an arbitrary finite F ⊆ F. Below, we shall call ≼_m and ≾_m inclusive mitigation orders, and ≤_m a strong mitigation order.
Theorem 1 The strong mitigation order ≤_m extends the partially comparable inclusive mitigation order ≾_m which, in turn, extends the fully comparable inclusive mitigation order ≼_m. Formally, for σ, σ′ ∈ R:

σ ≼_m σ′ ⟹ σ ≾_m σ′ ⟹ σ ≤_m σ′.

Remark 5 Discrete and linear mitigation orders such as ≤_m promote machine implementations with negative utilitarian decision ethics [Warburton, 2012, p. 51]. For example, severity intervals could be calculated at run-time based on sensor data about the possible operational situation of a system. The expected outcomes of all enabled mitigation actions, if any, will then be comparable according to the resulting ≤_m. This comparability allows the assessment of the actual reachability of states with strictly lower risk. Any resolution of a near-accident situation (Example 3) or a tram problem [Foot, 1978] (also known as the "trolley problem") would then consist in the choice of the mitigation action leading to the state with the lowest risk or the least severe of the expected negative outcomes. This scheme characterises negative utilitarianism.
Linear orders globally resolve decisions based on explicit and, therefore, disputable criteria. Consequently, utilitarian ethics have been criticised for leading to oversimplified approaches to resolving indecision. According to Warburton [2012, p. 48f], such critiques stress the difficulty of predicting the positive and negative effects of certain actions, in our case, the calculation of the severity of consequences of an activated risk factor.
Fortunately, a structured risk model with dependencies between risk factors could be used to complement utilitarian decision ethics with Kantian ethics, that is, to take conservative measures before high-severity risk factors get activated. For example, we can model the necessary preconditions of the tram problem as risk factors, and a machine based on this model can use these factors to constrain its behaviour.
Overall, although the presented model can be used with linear orders, the core discussions below try to stay agnostic of the mitigation order.

Local, Regional, and Global Safety
The orders ≼_m, ≾_m, and ≤_m are local in the sense that their definitions only require the comparison of pairs of risk states. Two more qualitative notions of safety seem to be useful.
Let R be non-empty and finite and reach: R × P → 2^R. Given a process P ∈ P and a risk state σ ∈ R, reach(σ, P) ⊆ R denotes the set of risk states reachable from σ by P, where σ itself is always reachable and, thus, σ ∈ reach(σ, P).
Then, we use ≾_m to determine two non-empty sets of minimal and maximal elements in R reachable in P, namely max_{≾_m}{reach(σ, P)} and min_{≾_m}{reach(σ, P)}.
In a situation represented by the process P in a specific risk state σ, these two sets signify the regionally safest (max) and the regionally most hazardous (min) states, respectively. The smallest such set will only and exactly contain σ, representing "the situation where P cannot do anything further about risk." This way, ≾_m yields a regional notion of safety. The notion is regional inasmuch as, once a maximal element in max_{≾_m}{reach(σ, P)} is reached, the risk model allows no more reasoning about safer states that P could reach from σ instead of maintaining its current risk state.
In addition to local and regional safety, (R/∼_s, ≤_m) yields a more global notion because of its linearity (Lemma 4). The ≤_m-based risk model is global in the sense that there are always unique safest and riskiest states in R/∼_s (Corollaries 5 and 6) among all risk states reachable by P from σ ∈ [σ]_{∼_s}. In contrast, the ≾_m-based risk model will not guarantee uniqueness of the globally safest state with respect to the reachable set of risk states. Overall, Lemma 5 provides a necessary condition for deriving strategies (i.e., policies or choice resolutions) that stabilise or terminate P in these globally safest regions. Note that the use of equivalence classes leads to more abstract forms of safest and riskiest states.
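The regional notion can be sketched operationally: enumerate reach(σ, P) over a finite risk-state graph and select the order-maximal (regionally safest) elements. The graph, the order, and all names below are our own illustrative encoding.

```python
# Sketch of regional safety: reachable set plus order-maximal elements.
def reach(sigma, edges):
    """Risk states reachable from σ over directed edges (σ itself included)."""
    seen, frontier = {sigma}, [sigma]
    while frontier:
        s = frontier.pop()
        for (src, dst) in edges:
            if src == s and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

def maximal(states, leq):
    """Elements with no strictly better state in the set."""
    return {s for s in states
            if not any(leq(s, t) and not leq(t, s) for t in states)}

# Toy order: "active" is strictly worse than "inactive".
leq = lambda s, t: s == t or (s == "active" and t == "inactive")
edges = [("active", "inactive")]        # one mitigation step
r = reach("active", edges)
assert r == {"active", "inactive"}
assert maximal(r, leq) == {"inactive"}  # regionally safest state
```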
These abstract min/max-bounded reachable sets can be used to assess risk at run-time, particularly by estimating the probability of occurrence and the severity of consequences from specific situational data only available during operation and, then, by accumulating these estimations according to the given risk model. Safety of a process then turns into the gradual presence and absence of the risk of undesired events during operation, during a system run, or, more formally, for subsets of traces(P). This notion complements safety as the crisp presence and absence of undesired events, as the avoidance of risk factors (e.g. undesired events), and as the reduction of the probability of occurrence or the severity of consequences of undesired events. As a side note, the reliability of a process, as the gradual presence and absence of defective behaviour of this process, can be seen as a special case of the presented approach when considering only risk factors that model system faults.

Dependencies between Risk Factors
In risk analysis, we often wish to model causal relationships between (phases of) risk factors and, consequently, risk states. For example, we might want to model that
1. the activation of one risk factor causes the activation of another risk factor,
2. the mitigation of one risk factor causes the activation of another risk factor, or
3. the activation of one risk factor requires the activation of another risk factor.
Example 9 Consider the following example illustrating these relationships:
1. Water or oil on a robot's fingers (f_1) causes the robot's hand to be slippery (f_2), such that holding a heavy object becomes an action with an increased likelihood of a negative outcome.
2. An increased grabbing pressure (f_3), applied to mitigate the object slipping out of the grabber's hold, could potentially cause damage to the object. This negative outcome can be modelled as a final risk factor f_4.
3. Damage to the object (f_4) requires at least one of high grabbing pressure (f_3) or, applying backward identification of further risk factors, the object falling from a height onto a hard surface (f_5). This fall (f_5) in turn requires at least one of slippery fingers (f_1) or a mistaken loosening of the grabber (f_6).

Relations over Risk Spaces
We can take account of relationships, such as those illustrated in Example 9, by imposing constraints on pairs of risk states and the phases of their constituent factors. For this, we use binary relations over risk spaces R to approximate causality assumptions about parts of a process P less known or less under control (typically, some kind of environment E) and causality requirements to be imposed on parts of P more known or more under control. In the following, we employ temporal logic (Section 2.3) and then relational specification to formalise constraints.
In the following, let i, j ∈ [1..n] with i ≠ j. Now, consider two distinct risk factors f_i, f_j ∈ F. For example, the causes (or trigger) constraint can be defined as follows: Note that in our model we have ¬f_i ⇔ f̄_i ∨ 0_fi. Formula (6) requires that for any path through R from a state in R_0 ⊆ R and at any step of this path, if f_i is active then, within at most t time units, f_j must be active until f_i becomes either inactive or mitigated. f_j can stay active forever and may already be active before the activation of f_i.
Formula (7) forms a simplification of Formula (6), taking into account the (time) abstraction used in our definition of R. This abstraction implies that the step from an inactive risk factor to its activated phase is a single logical time step, albeit assuming a real-time duration greater than 0.
Let C be the set of all constraints. For the translation of TL constraints as described above into a form usable by parallel composition, we use a map ⟦·⟧_c : C → 2^(R×R) to denote the relational semantics of constraints over R. We will see in the following how applying a constraint c ∈ C to a risk structure R can restrict the transition relation →.
For example, causes constraints can be encoded as relations over R as follows: Note that all pairs (σ, σ′) violating the antecedent of the conditional are in ⟦f_i causes f_j⟧_c as well. Furthermore, this specific constraint allows immediate or weak causation as well as delayed or strong causation of at most one transition (that is, one "logical" step) in R.
The requires constraint can be written in TL form as follows: and in relational form as follows: The lifting of requires to sets F, F′ ⊆ F with F ∩ F′ = ∅ is described as follows: Note that this variant of the requires constraint refers to all factors specified on its left-hand side and, this way, resembles an AND-gate as used in FTA. The side condition F ∩ F′ = ∅ rules out the case that a risk factor requires or causes itself (i.e., the "chicken and egg" problem).
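The relational encodings of causes and requires can be illustrated with the following sketch. The three-valued phase model mirrors Definition 3, but the concrete one-step conditions are simplified assumptions (they ignore the time bound t), and the factor names are hypothetical.

```python
# Sketch of the relational semantics [[.]]_c of causes/requires over a
# small risk space whose states map factors to phases '0'/'a'/'m'
# (inactive, active, mitigated).
from itertools import product

FACTORS = ('f1', 'f2')
PHASES = ('0', 'a', 'm')

def states():
    """All risk states: total assignments of phases to factors."""
    return [dict(zip(FACTORS, p)) for p in product(PHASES, repeat=len(FACTORS))]

def key(s):
    """Hashable representation of a risk state."""
    return tuple(sorted(s.items()))

def causes(fi, fj):
    """One-step sketch of [[fi causes fj]]_c: an active fi forces fj to be
    active in the successor unless fi itself leaves its active phase.
    Pairs violating the antecedent are included, as noted in the text."""
    return {(key(s), key(t)) for s in states() for t in states()
            if s[fi] != 'a' or t[fj] == 'a' or t[fi] != 'a'}

def requires(fi, fj):
    """One-step sketch of [[fi requires fj]]_c: fi active in the target
    state presupposes fj active there as well."""
    return {(key(s), key(t)) for s in states() for t in states()
            if t[fi] != 'a' or t[fj] == 'a'}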
Remark 6 The causes and requires constraints constitute basic templates for the design of constraints and exemplify how constraints can support the safety engineer in characterising causality in a state-based relational way.
These dimensions give rise to various types of constraints. Table 1 describes types of constraints useful in risk analysis. Some of these constraints have been implemented in YAP [Gleirscher, 2018a], which enables their use based on previous discussions in Gleirscher [2017, 2018a,b]. However, their comprehensive treatment would exceed the scope of this work. As we have seen, constraints prune irrelevant state pairs and, this way, determine the shape of (R, →). This mechanism is reflected by the following definition:

Definition 13 (Relational Semantics for Constraints) For a set of constraints C ⊆ C, ⟦C⟧_c = ⋂_{c∈C} ⟦c⟧_c.

Constraints enable a top-down way of specifying risk models over risk spaces. In the context of the composition of risk spaces, this alternative way requires the definition of the following healthiness or well-formedness condition:

Definition 14 (Well-formedness of Constraints) Let R(F) be a risk space formed by a family of risk factors F. We say that a constraint c ∈ C is well-formed for R(F) if and only if it does not refer to risk factors other than the ones in F.

Compatibility of Risk Factors
Here, we continue the methodological considerations from Remark 4 on consistent and meaningful interval and probability specifications.
For example, assume we have two consequences C_1 and C_2 associated with correct severity intervals [l_1, u_1) and [l_2, u_2). Two risk factors f_1 and f_2 can model three situations with their individually estimated intervals f_1.s and f_2.s:
1. f_1 and f_2 share all their consequences, for example, C_1: If both factors get active, the convex hull can be backed by a meaningful consistency condition.
2. f_1 and f_2 do not share any consequences, for example, f_1 models C_1 and f_2 models C_2: If both factors get active, the convex hull extends both the range of consequences and severities, and the consistency condition {f_1.s, f_2.s}* ⊆ {[l_1, u_1), [l_2, u_2)}* seems inappropriate. For example, if C_1 and C_2 signify the damage of two independent objects with the same interval, the convex hull would not account for this because of the idempotency of interval union.
3. f_1 and f_2 share some of their consequences: For example, f_1 potentially damages objects A and B, and f_2 potentially damages objects B and C. If both factors get active, the convex hull merges information about all consequences and, hence, results in a combination of cases 1 and 2.
While we can abstract from consequences shared by all risk factors, the treatment of partially shared consequences requires more care. Cases 2 and 3 can be modelled by an additional risk factor f_3 that is caused by f_1 and f_2 and carries an up-shifted severity interval, for example, f_3.s = f_1.s + f_2.s. Below in Section 5, we discuss how such a dependency can be specified by constraints on the risk space, that is, by {f_1, f_2} causes {f_3} and {f_3} excludes {f_1, f_2} (described in Table 1).
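The two interval combinations discussed above can be sketched as follows. The concrete numbers are hypothetical; the additive up-shift f_3.s = f_1.s + f_2.s is taken directly from the text.

```python
# Sketch of combining severity intervals [l, u) of two risk factors.

def hull(a, b):
    """Convex hull of two half-open severity intervals:
    adequate when both factors share all consequences (case 1)."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def up_shift(a, b):
    """Interval sum f3.s = f1.s + f2.s for an extra factor modelling
    disjoint (or partially shared) consequences (cases 2 and 3)."""
    return (a[0] + b[0], a[1] + b[1])

f1_s, f2_s = (2.0, 5.0), (3.0, 7.0)  # hypothetical severity intervals

# Idempotency of the hull is exactly why it under-reports two
# independent damages with identical intervals:
assert hull(f1_s, f1_s) == f1_s
```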
Based on this analysis, we call a set F of risk factors compatible if there is a meaningful way of combining the severity intervals for each subset F′ ⊆ F. For methodological support in achieving and maintaining compatibility, the next paragraph exemplifies formal considerations of the consistency of severity intervals.
Factor Characteristics and Dependencies. Constraints over risk spaces typically have implications on the characteristics of risk factors, such as severity intervals:
• For example, for a constraint f_1 causes f_2, it is reasonable to claim f_1.s ⊇ f_2.s.
• Analogously, for a constraint f_1 requires f_2, we might wish to see f_1.s ⊆ f_2.s.
An in-depth analysis of techniques for making F compatible and a detailed discussion of a complete set of rules relating factor characteristics and factor dependencies are beyond the scope of this work.

Risk Structures
Risk spaces are an abstract domain that can be used to equip processes with risk awareness and to assess the risk mitigation capabilities of such processes. R represents the unconstrained composition of risk factors; events and transitions between risk states are ignored. Instead of considering R in its full extension, we can select subsets of R for risk models of specific applications. Even if not entirely known in advance, risk in a specific application will usually have a specific structure that might be more adequately represented by a constrained composition of risk factors, eventually taking into account events and transitions between risk states. Specifically, based on constraints on the combinations of the risk factors' phases and on the synchronisation of events corresponding to the CSP model of concurrency, we specify
• which region of R we want to pay attention to (i.e., the scope of safety guarantees) and
• which region of R we consider to be safe for a process P (i.e., conventional safety).
In the following, we make use of risk factors' transition relations, define a form of parallel composition, and discuss consistency, well-formedness, and validity of risk structures.
Definition 15 (Risk Structure) A risk structure R is an expression of the form where p ∈ Ph_f (i.e., 0_f, f, or f̄) for any risk factor f ∈ F ⊆ F (Definition 3) is an atom and C ⊆ C is a set of constraints (Definition 13). S denotes the set of all risk structures, consequently including the set of all phases of risk factors ⋃_{f∈F} Ph_f ⊂ S.
The operator ∥ signifies the parallel composition of two risk structures, and the operator [·]_C applies all constraints in C to a risk structure. The semantics and algebraic properties of these operators are discussed below.
The semantics ⟦R⟧_r of a risk structure comprises
• the risk space R (Definition 5),
• the event set Σ (Definition 1),
• the transition relation → ⊆ R × (2^Σ \ ∅) × R (Definition 16), and
• the set of initial risk states R_0.
Furthermore, let scope : S → 2^F be a map that identifies the risk factors referred to by a risk structure. The operational semantics of each construct of the language in Definition 15 is provided below.

Atoms
The operational semantics of a single risk factor f in phase p ∈ Ph_f is given by ⟦p⟧_r = (Ph_f, Σ_f, →_f, {p}), where the elements of this tuple are given by Definition 3 and described in Figure 2.

Parallel Composition
Let A, B, F ⊆ F be sets of risk factors with A, B ⊆ F and, for i ∈ {1, 2, 3}, let R_i ∈ S be arbitrary risk structures according to Definition 15. For situations where several safety engineers are entrusted with the task of risk analysis of a complex safety-critical system, we define what it means to combine two risk structures with intersecting sets of risk factors, formally, scope(R_1) ∩ scope(R_2) ≠ ∅. Because a single risk factor cannot be in two different phases at the same time, we employ a corresponding constraint to uniquely define the transition relation resulting from parallel composition: only those risk states can be combined whose risk factors in the shared scope are in identical phases. Following the discussion on page 13, the following predicate encodes this constraint: Given this predicate (cf. Formula (1) and Lemma 1), the composed transition relation → is given by the following step rules: These step rules together resemble the step law of generalised parallel composition in CSP [Roscoe, 2010, Sec. 3.4] and can be used to determine the reachable states of the composed risk structure.

Remark 7 The side conditions in the rules of this composition operator prohibit any behaviour leading to inconsistent states. In other words, the composition constrains the behaviour of two risk structures with an overlapping scope. With Lemma 15 below, we further discuss another form of behavioural constraint and the combination of composition and constraints for risk modelling.
Were we to use arbitrary CSP processes as atoms, and were we to allow different interfaces for each use of the parallel composition operator, several risk structures, when composed, would not guarantee freedom from interference, would exhibit different event and state traces and, consequently, the order of their composition would lead to different risk models. Associative laws for generalised parallel composition in CSP are not universally applicable. The discussion by Roscoe [2010, p. 60] highlights how differences in the alphabets shared between each pair in a set of risk structures give meaning to the order in which these pairs are composed. In the case of differing alphabets X ≠ Y, one has to compare the traces of the composed processes to prove that specific guarantees are preserved by their composition. This can be computationally complex. More general forms of processes and composition have been dealt with in formalisms such as Circus [Oliveira, 2005, Oliveira et al., 2009] and FOCUS [Broy and Stølen, 2001].
Overall, we obtain two advantages from the use of risk factors as atoms in risk structures. This way, all atoms have exactly the same alphabet Σ_F; that is, all factors always have to agree on their view of the same overall process P.
Overlapping scopes, in CSP terms, then mean copies of risk factors that are reduced by idempotency (see Lemma 7 below). The transformation of risk factors into CSP as described on page 12 encodes all information of the risk space R into the processes 0_f, f, and f̄. This way, we restrict the use of generalised parallel composition to synchronous parallel composition.
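Since all atoms share the alphabet Σ_F, generalised parallel composition collapses to a synchronous product, which can be sketched as follows. The phase names and events are hypothetical stand-ins for the transition relations of Definition 3.

```python
# Sketch of synchronous parallel composition of two risk factors over a
# shared alphabet: both components must agree on every event.

def sync_compose(t1, t2, alphabet):
    """Product transition relation: ((p1, p2), e, (q1, q2)) iff both
    factors take the same event e from the shared alphabet."""
    return {((p1, p2), e, (q1, q2))
            for (p1, e, q1) in t1
            for (p2, e2, q2) in t2
            if e == e2 and e in alphabet}

# Two identical hypothetical factors with phases '0'/'a'/'m'.
t1 = {('0', 'endanger', 'a'), ('a', 'mitigate', 'm'), ('0', 'tick', '0')}
t2 = {('0', 'endanger', 'a'), ('a', 'mitigate', 'm'), ('0', 'tick', '0')}
sigma_F = {'endanger', 'mitigate', 'tick'}

prod = sync_compose(t1, t2, sigma_F)
# In the spirit of idempotency (Lemma 7): composing identical factors
# yields transitions whose two components always agree.
assert all(p1 == p2 and q1 == q2 for ((p1, p2), e, (q1, q2)) in prod)
```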

Algebraic Properties of Composition (∥).
The following discussion shows some properties desirable of the (parallel) composition of risk structures.
Lemma 7 (Idempotency of ∥) For any R_1 ∈ S, we have R_1 ∥ R_1 = R_1.

Proof 11 (Proof of Lemma 7.) From Definition 5, by Lemma 1, R_1 = R_1 ⊗ R_1 is preserved. Using the convention from Section 3.2, we can apply the rules from Definition 16 as follows: Because of the uniqueness of R_1, the composition takes two consistent views of R_1 offering identical states and events at each step. So, because of σ_1 = σ_2 and e = f, the antecedents in the rules ∥-l and ∥-r can never be satisfied, and the rules ∥-pl and ∥-pr produce infeasible transitions to be pruned according to Definition 3. The rule ∥-b turns into a tautology and both views always exhibit an identical transition. Furthermore, the symmetric duals (i.e., ∥-r is the dual of ∥-l, ∥-pr is the dual of ∥-pl) of the rules in Definition 16 yield the same → and, hence, the same Σ_u.
Lemma 9 (Associativity of ∥) Proof 13 (Proof Sketch of Lemma 9.) Because of empty shared scopes, the side conditions always hold on both sides and states are merged by disjoint union. Set operations on states and events are commutative and associative. As pointed out in Remark 7, risk factors (Definition 3) always share the whole alphabet Σ_F, that is, they synchronise on all events (Section 3.2). Interleaving is avoided, the alphabetised parallel operator takes Σ_F on both sides and, this way, reduces to synchronous parallel composition, for which a general associative law is available.
Lemma 9 allows the safe use of ∥_{f∈F} 0_f as a shortcut for ((. . . (0_{f_1} ∥ 0_{f_2}) ∥ . . .) ∥ 0_{f_n}).

Lemma 10 (Associativity of ∥) For any R_1, R_2, R_3 ∈ S with equal scopes, that is, scope(R_1) = scope(R_2) = scope(R_3), we have R_1 ∥ (R_2 ∥ R_3) = (R_1 ∥ R_2) ∥ R_3.

Proof 14 (Proof Sketch of Lemma 10.) The proof is analogous to the proof of Lemma 9.

Constraints
Section 5 introduced the concept of relations over risk spaces with the aim of shaping (R, →). Based on this concept, we now discuss how constraints can be embedded into an operator of the language of risk structures as introduced in Definition 15. This way, constraints form a redundant specification of beliefs about an application and the operational risk associated with this application. Such redundancy can be used to identify inconsistencies between R and the real world, potentially helpful in model refinement, completion, and validation [Gleirscher, 2014]. In particular, these inconsistencies allow choices for their resolution. We make such inconsistencies explicit as follows.
For a set of risk factors F ⊆ F and a risk state σ ∈ R(F), we call the risk structure R_σ = [∥_{f∈F} phase(σ, f)]_C with R_0 = {σ} the characteristic risk structure of σ. From a risk state σ or its characteristic structure R_σ, we distinguish three ways to determine the set of transitions σ−→, yielding the relations σ−→_1, σ−→_2, and σ−→_3. This framework gives rise to the following types of inconsistencies:
1. The relation σ−→_1 \ σ−→_2 describes sensible transitions with invisible events signified by τ_c. We choose to prune transitions from → if they deviate from what is provided by the risk factors.
2. The relation σ−→_2 \ σ−→_1 describes violent transitions. We choose to prune transitions from → if they violate constraints and, hence, lead to inconsistencies in R.
3. The relation σ−→_3 \ (σ−→_1 ∪ σ−→_2) describes transitions realised only in P. In →, we choose to label such transitions with a τ_p, making them subject to process-driven disclosure of R.
4. The relation (σ−→_1 ∩ σ−→_2) \ σ−→_3 describes unrealised transitions. We choose to prune such transitions from → because they are not realised in P and, hence, would add only little value to R.
This case analysis suggests several possibilities for designing a semantics for constraints. As indicated above, we investigate the transition relation → resulting from this treatment. Fortunately, we have σ−→_1 \ σ−→_2 = ∅ because we require all risk factors to be input-enabled.
Definition 17 (Constraint) Let R be a risk structure (Definition 15) and C a set of constraints well-formed for R (Section 5 and Definition 14). Then, the constrained form is given as follows: This definition incorporates the handling of inconsistencies according to cases 1 and 2. Cases 3 and 4 can be helpful in the incremental construction of R.
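The pruning performed by [·]_C can be sketched as a relational filter. The two-factor states and the particular constraint relation below are hypothetical; the subset property at the end illustrates why constraints always refine (cf. Lemma 15 below).

```python
# Sketch of the constraint operator [.]_C as relational pruning: keep a
# transition of R only if its (source, target) pair lies in the
# relational semantics of every constraint.

def apply_constraints(transitions, constraint_rels):
    return {(s, e, t) for (s, e, t) in transitions
            if all((s, t) in rel for rel in constraint_rels)}

# Hypothetical states: (phase of f1, phase of f2) over '0'/'a'/'m'.
R_trans = {(('0', '0'), 'e1', ('a', '0')),
           (('a', '0'), 'e2', ('a', 'a')),
           (('a', '0'), 'e3', ('0', '0'))}

all_states = [(a, b) for a in '0am' for b in '0am']
# Sketch of 'f1 causes f2': f1 active in the target forces f2 active.
c = {(s, t) for s in all_states for t in all_states
     if t[0] != 'a' or t[1] == 'a'}

pruned = apply_constraints(R_trans, [c])
# Pruning only removes transitions, so the constrained structure
# trace-refines R.
assert pruned <= R_trans
```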

Algebraic Properties of Constraints ([·]_C).
The following lemma states that the order in which single constraints are applied to R does not matter.

Lemma 11 (Exchange) For any R ∈ S and C_1, C_2 ⊆ C, we have [[R]_C1]_C2 = [R]_(C1 ∪ C2).

Proof 15 (Proof Sketch.) The proof is by induction over C_1, supported by an additional lemma for the induction step, and takes advantage of the associativity of set intersection as used in Definition 13. The detailed proof is stated in Proof 24.
Lemma 12 (Idempotency) [[R]_C]_C = [R]_C. Proof 16 This lemma follows from Lemma 11 and the idempotency of set union.
Lemma 13 (Commutativity) [[R]_C1]_C2 = [[R]_C2]_C1. Proof 17 This lemma follows from Lemma 11 and the commutativity of set union.
Lemma 14 (Associativity) Proof 18 This lemma follows from Lemma 11 and the associativity of set union.

Lemma 15 establishes a meaning of constraints that we might usually expect from a methodological viewpoint: the constraint operator prunes the transition relation of R according to a relational specification of risk in terms of a set of constraints.
Finally, we discuss the special case where constraints are applied to an atom p ∈ Ph_f associated with a risk factor f. Observe that scope(p) = {f} and, by Definition 14, C is well-formed for R(scope(p)) only if C contains only constraints that refer to f. Hence, our previous convention entails C′ = ∅ for the largest well-formed C′ ⊆ C. From this observation, we obtain [p]_C = p.

Discussion
Here, we briefly discuss the potential of risk structures to be used as a formal artefact in failure assessments (Section 2.2) and in the design of safety monitors (Sections 1.2 and 2.2).
Integration with Failure Analysis. As indicated in Section 5, fault trees (Section 2.2) can be transformed into risk structures by using dependencies. Because of the step semantics underlying constraints (Definition 13), such a translation is also possible for gates like PAND or POR as used in dynamic fault trees. This way, the presented approach offers the possibility of using fault trees generated from architectures as a risk structure or of integrating them into an existing risk structure. This leads to a practical way of combining risk factors internal to an autonomous robot with risk factors stemming from its operational environment.
Monitoring. Each risk factor f (Figure 2) can be implemented as a monitor of the process P. Events of f can be translated into observers (i.e., sensors and variable checkers) of transitions maintaining the monitor state and of transitions leading to a phase change. Consequently, implementing the events and phases of each risk factor by observers allows the use of a risk structure R as a monitor of P's risk state.
Given R and P, we can devise an incremental and concurrent approach to monitoring: while the situation of P is monitored, P's risk state is monitored and continuously evaluated according to this situation. Risk monitoring then comprises two parts: safety monitoring for the detection of endangerments (i.e., violations of safety properties) and co-safety monitoring for the detection of mitigations (i.e., acceptances of mitigation properties). For learning τ_c (Section 6.3), both monitoring tasks can be carried out in an incremental mode, that is, introducing previously non-existing states and transitions according to the given states.
When using a risk factor as a monitor automaton for P, how do we deal with nondeterministic risk factors? With a nondeterministic risk factor, from an observed phase and after an observed event, P may have reached several distinct phases and, thus, several distinct risk states. Without further information, the monitor does not know which of these states has actually been reached by P. The monitor would then be in a risk region that is ordered and bounded by min_≤m/max_≤m as discussed in Section 4.4. Given that the phases of risk factors carry information about P's state in terms of disjoint state invariants, we can design state estimators into our monitor. These estimators might gradually be able to restore some of the lost state information and again uniquely identify P's actual risk state according to R.
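The set-based treatment of nondeterministic factors can be sketched as follows. The factor, its transitions, and the invariant check are hypothetical.

```python
# Sketch of monitoring a nondeterministic risk factor: after an observed
# event the monitor tracks the *set* of phases P may be in (a risk
# region); a state estimator narrows it using phase invariants checked
# against sensed data.

def step(region, event, trans):
    """Possible phases after observing `event` from any phase in `region`."""
    return {q for p in region for (p0, e, q) in trans
            if p0 == p and e == event}

def estimate(region, invariant_holds):
    """Keep only phases whose state invariant matches the sensor data."""
    return {p for p in region if invariant_holds(p)}

# Hypothetical factor: 'endanger' may or may not activate it.
trans = {('0', 'endanger', 'a'), ('0', 'endanger', '0'),
         ('a', 'mitigate', 'm')}

region = step({'0'}, 'endanger', trans)
assert region == {'0', 'a'}          # nondeterminism: a risk region
# A sensor confirms the hazard invariant, so the estimator restores the
# unique risk state:
region = estimate(region, lambda p: p == 'a')
assert region == {'a'}
```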

Conclusions and Future Work
The certification of robots and autonomous systems requires the validation and verification of their controllers. High automation requires these controllers to have risk monitoring and handling functions. Complex machines and environments will make such functions complex as well. Such complexity might best be handled by incremental construction and verification. It is therefore helpful to have a compositional method that allows these functions to be incrementally modelled and assessed at design time. Eventually, these models can be transformed into verified run-time monitors and mitigation controllers that implement these functions.
In this work, we discussed a formal framework for analysing the risk of an autonomous system in its operational environment and for constructing corresponding models that represent risk monitoring and handling functions for this system. Algebraic laws over these models support systematic design and help the engineer handle large models. We closed the discussion with an explanation of how the risk model can be used for verified monitor synthesis.
Future Work. Further steps in developing this framework will include
• the addition of probabilistic semantics to risk factors (Section 3.1),
• the extension of the presented framework to more general forms of risk factors (Section 3.1),
• the construction of risk lattices from risk spaces, mitigation orders, and event structures (Section 4),
• the development of reachability analysers for risk estimation and the synthesis of mitigation strategies, based on subsets of the risk space associated with a process state and dynamics (Section 4.4),
• the use of a dynamical model of the process for the reachability analysers (Section 4.4),
• the enhancement of the discussion of factor dependency constraints (Section 5),
• the investigation of distributivity of composition and constraints in risk structures (Section 6.3), and
• a mechanisation and extension of the given proofs in Isabelle/HOL.

A Nomenclature
See Table 2.

B Proof Details
This section collects some of the more detailed proofs.
We show (i): Let σ ∈ R(F_1 ∪ F_2); then, by definition, σ is a total injection and, hence, every restriction of σ is a total injection, particularly the restrictions σ|F_1 and σ|F_2. Obviously, we have σ|F_1(f) = σ|F_2(f) for all f ∈ F_1 ∩ F_2. Furthermore, by definition, σ(f) is faithful to Ph_f for all f ∈ F_1 ∪ F_2 and, thus, so are both these restrictions. These two results lead to σ_1 = σ|F_1 ∈ R(F_1) and σ_2 = σ|F_2 ∈ R(F_2) and, finally, to the existence of the wanted pair.
Proof 21 (Proof of Lemma 2.) (F, ∪) is a semi-group because ∪ is an associative binary operation on F. (The sub-proof based on the definition of ≈ is omitted here.) Based on Formula (12), we show by algebraic manipulation that the binary operation ⊗ on R is associative: R(F_1) ⊗ (R(F_2) ⊗ R(F_3)) = (R(F_1) ⊗ R(F_2)) ⊗ R(F_3). Hence, (R, ⊗) is a semi-group, too. Lemma 1 then completes the proof.
Proof 22 (Proof of Lemma 4.) For this, we only need to show that any two risk states σ, σ′ ∈ R are (i) comparable and (ii) antisymmetric. We show (i) by showing that conditions (a) and (b) guarantee the comparability of any two risk states in R based on the interval order ≤ as defined above, using the fact that comparability carries over from R/∼_s to R (Formula (3)). We have to consider the following three cases to complete the sub-proof of (i):
• the case of incomparable phases σ(f) and σ′(f): the state pair gets incomparable and the implication is trivially fulfilled,
• the inverse case σ(f) ≻_f σ′(f): the state pair gets incomparable and the implication is again trivially fulfilled, and
• the aligned case σ(f) ⪯_f σ′(f): the state pair stays comparable. However, σ′(f) can either be f̄ (hence, σ(f) = f̄, maintaining ⊆), f (hence, σ(f) ∈ {f, f̄, 0_f}, maintaining ⊆), or 0_f (see the former sub-case).
Having proved these cases completes the induction step by establishing IS.
For ⪯_m, we only have to substitute case 1 of the IS: for f̄, the smallest phase of f according to Definition 3, incomparability arises at most for (0_f, f̄) and (f̄, 0_f). These two phase pairs of f would not alter the active sets of both states and hence maintain ⊆ as well.
Proof 24 (Proof of Lemma 11.) From the associativity of set intersection in Definition 13, we know that the order in which constraints are applied to R × R does not matter. Consequently, we also have (by Formula (13))

For an arbitrary C ⊆ C, we relax Definition 17 by establishing the following equivalence: [R]_C = [R]_C′ for the largest C′ ⊆ C well-formed for R (Definition 14). Then, [R]_C denotes the risk structure resulting from applying only and exactly the constraints in C′. Moreover, Definition 17 generalises Definition 16 by guarding the ∥-step. Definition 17, together with the relational pruning for three out of four cases according to the analysis on page 25, yields the following general refinement law:

Lemma 15 (Constraints Always Refine) R ⊑_T [R]_C

Proof 19 The proof is by showing that traces(R) ⊇ traces([R]_C). Fix t ∈ traces([R]_C) with t = f ˆ l for induction over the split of t into f and l. Induction step: Assume that f ∈ traces(R) (IH). With l = ⟨e⟩ ˆ l′ and t = f ˆ ⟨e⟩ ˆ l′, there exist σ, σ′ ∈ R such that σ −e→_C σ′ and, according to Formula ([·]_C-step), such that σ −e→ σ′ and (σ, σ′) ∈ ⟦C⟧_c. Because of σ −e→ σ′, there must be a trace f ˆ ⟨e⟩ ˆ l′′ ∈ traces(R). e is part of l and, therefore, of t, such that we complete the induction step and establish a new IH. (We do not need to prove the equivalence l′ = l′′.) The case f = ⟨⟩ provides the induction start.

Table 1: Overview of useful constraints. Legend: see dimensions on page 20; Y . . . implemented in YAP.