Who’s in Charge Here? A Survey on Trustworthy AI in Variable Autonomy Robotic Systems

This article surveys the Variable Autonomy (VA) robotics literature that considers two contributory elements to Trustworthy AI: transparency and explainability. These elements should play a crucial role when designing and adopting robotic systems, especially in VA where poor or untimely adjustments of the system’s level of autonomy can lead to errors, control conflicts, user frustration, and, ultimately, disuse of the system. Despite this need, transparency and explainability are, to the best of our knowledge, mostly overlooked in the VA robotics literature or are not considered explicitly. In this article, we aim to present and examine the most recent contributions to the VA literature concerning transparency and explainability. In addition, we propose a way of thinking about VA by breaking these two concepts down based on: the mission of the human-robot team; who the stakeholder is; what needs to be made transparent or explained; why they need it; and how it can be achieved. Last, we provide insights and propose ways to move VA research forward. Our goal with this article is to raise awareness and inter-community discussions among the Trustworthy AI and the VA robotics communities.


INTRODUCTION
Today, there is ample discourse around transparency and explainability for Artificial Intelligence (AI) [9]. In the Variable Autonomy (VA) community, the conversation is, by comparison, growing slowly. In VA, a team consisting of human(s) and intelligent robot(s) shares decision-making authority and control throughout a joint mission, adjusting the levels of autonomy as needed from teleoperation to full autonomy [23,25,40,73,86,91]. As with any such teaming effort, the team members are interdependent to the extent that their communication and shared understanding are pivotal to task completion. In the best case, a failed VA mission results in the system's retirement: users and industries will simply not adopt it. In the worst case, however, a failure can result in an incident that puts the human operator and other members of society at risk of ill effects.
The fear of such ill effects and the overall need for management of the technology was one of the key take-away messages from past Eurobarometers (public opinion surveys regularly performed on behalf of the EU institutions) focusing on robotics and other automation technologies [1,2]. The following year, the European Commission's High-Level Expert Group released its first publication, the "Guidelines for Trustworthy AI", where it defined trustworthiness as the combination of ethical, socio-legal, and robustness factors [32]. These guidelines are just one of many policy documents advocating for the adoption of AI and robotic technologies [44,101]. They discuss, at least at a high level, how to govern AI technology [96]. Patterns emerge from these recently published guidelines: they advocate, rather abstractly, for similar design principles and socio-ethical values. Notably, the majority of these guidelines promote the value of transparency most prominently [44].
Another value promoted in these documents is the principle of explainability. While the keyword explainability is often used interchangeably with transparency, the two terms should not be conflated [9]. Some policy documents, such as the "Guidelines for Trustworthy AI" by the European Commission, consider explainability a facet of transparency. Others, such as the "IEEE Standard for Transparency of Autonomous Systems" [39,102], try to draw a clear distinction between the two: transparency concerns the extent to which an AI system reveals its internal decision-making to the human user or observer, while explainability concerns the active ability to offer explanations about decision-making, even to non-expert users. A system that is transparent may not be explainable, and vice versa.
Considering the significance of explainability and transparency in the discourse around trustworthy AI, we make both of these principles the primary focus of this article. Similar to the IEEE standard presented above, we consider them separate values. However, the ultimate goal of both values is to ensure adequate and meaningful human control and calibration of trust in the system, concepts that sit at the core of our structured literature review, even if they use different means to achieve this. Furthermore, we turn our attention towards robotics, where an AI agent is physically embodied in an environment that it can perceive and interact with. Our decision to focus on "beings-in-the-world" is based on the additional challenges embodiment presents. For instance, constraints placed by sensors and actuators, as well as limited computational resources, demand that action selection be as efficient as possible [14]. Such efficiency may be at odds with attempts to transfer control back and forth to a human operator, as is the case in VA systems. Moreover, the constraints placed by the physical actuators add latency and other delays when transferring control. The physical nature of robotic systems also gives rise to anthropomorphism and to challenges in other trustworthy AI aspects, for example, privacy. Within robotics, there is a fundamental need for real-time performance and stability, but also for understanding of opaque emergent behaviour. This emergent behaviour, produced by complex interactions between robot agents, their environment, and other agents, is not easily observable or predictable; transparency and explainability become necessities if an appropriate amount of trust is to be maintained [97].
Trust is considered fundamental for teaming efforts in general, and human-robot teams are no exception [54]. We would not expect a human to team up with automation in the same way as with an autonomous agent [59]; humans rather use automated tools. We focus on embodied autonomous systems in this article. The term "autonomous" implies an agent that can govern itself in a broader sense, introducing some notion of intelligence into decision making [13,27,55]. Automation at large, meanwhile, is confined to controlling specific elements of specific procedures under tight constraints [53].
We recognise that there are multiple definitions of the terms "automation" and "autonomy" in the literature; however, discussing all of them is beyond the scope of this article. Instead, we focus exclusively on embodied autonomous systems, which are considered an overlapping field with human-automation systems [42]. In a robotic system, different autonomous behaviours are often aggregated and interdependent; for instance, a robot might be equipped with autonomous navigation, decision-making, perception, and planning, which introduces much more complexity than pure automation. We therefore expect transparency and explainability needs to be correspondingly more complex, tailored to the complexities of autonomy in robots. While similar, human-robot teams involve layered complexities that allow for genuine "teaming" efforts, whereas human-automation interactions do not effectively allow for the same.
When autonomy is variable in a human-robot team, as is the case in VA systems, it is not uncommon for control conflicts to arise between the human operator and the robot as they override each other's commands in an attempt to take control of the robot's actions [23,26,82], or for control allocation to be used sub-optimally due to misaligned trust in the robot's capabilities [24,25]. We argue that transparency and explainability are, therefore, critical considerations for any VA system to be effectively developed and utilised. While research into these concepts has gained traction in the robotics and AI literature, their investigation is fairly limited in VA. Maturing the discussion in this field is one objective this article aims to set in motion. Motivated by the existing opportunity to develop such transparency and explainability techniques for VA systems, we aim to examine the most recent contributions to the VA literature with respect to these concepts and understand the extent to which they are considered. Hence, our main contributions are as follows:
- identify and quantify the number of publications that refer, in some way, to the concepts of explainability and transparency in AI and VA;
- a qualitative evaluation of research done with respect to transparency and explainability, for example, how they are implemented and other design considerations;
- reflect on the research direction taken by the VA community regarding transparency and explainability;
- propose a systematic way of thinking that begins with the "mission" of the application domain, further determining "who", "why", "what", and "how"; and
- identify gaps and future considerations for the development of transparency and explainability research for VA.
While there is a lot of literature on human-robot teaming (see surveys [43,66,93,103,104], for example), we explicitly focus our contributions on trustworthy human-robot teaming in situations where the levels or degrees of autonomy change. VA systems in human-robot teaming require special care when it comes to due diligence on when to allow a change of autonomy and which actions an agent may assume control over, compared to "traditional" human-robot teaming systems where the human and the robot(s) have more static roles. It is, hence, important to examine this special case of human-robot teaming on its own. This article is structured as follows: first, we establish a foundation with relevant terminology and corresponding definitions in Section 2. Then, in Section 3, we describe the structured approach by which this survey is conducted. In Section 4, we summarise the results we gathered, both quantitatively and qualitatively. The discussion is presented in Section 5, where we unpack the implications of the work that was covered. Finally, we conclude with Section 6, where we draw our final takeaways and call for more collaborative efforts in making systems with VA more explainable, transparent, and trustworthy for all.

BACKGROUND
In this section, we provide foundational information about the concepts of transparency and explainability, examining how they are discussed more broadly within the scope of AI. Then, we drill down on how these concepts translate across to the robotics community and further discuss the potential benefits for systems with VA. For a broader overview, we provide a breakdown of the target concepts and their relations to one another in Figure 1.

Transparency
Transparency as a value in public policy was popularised in the aftermath of the Watergate scandal, implying openness in data and processes so as to fight corruption and ensure accountability [11]. Transparency made its way into the AI and robotics literature with a variety of definitions [54]. Amongst others, transparency implies exposing the confidence level for a prediction [30], alerting the user to unexpected behaviour [47], or revealing how and why the system made a particular decision [21].
When we refer to transparency, we do not mean that every algorithmic detail of an AI system is "exposed". In fact, this approach may be counterproductive, inducing cognitive overload and impairing the user's ability to process information [69,97]. Transparency should instead induce comprehensibility; the EU's civil law resolution on robotics also alludes to this point by emphasising that it "must always be possible to reduce the AI system's computations to a form comprehensible by humans" [95]. Here, a link is made between "maximal transparency" and the robot's "predictability", where transparency is described as the ability to offer a rationale for autonomous decision-making that may be impactful to people's lives. The "IEEE 7001 Standard on Transparency of Autonomous Systems" considers that, for adequate transparency, each stakeholder may require specific, "appropriate to the stakeholder" information regarding "the system's status either at a specific moment or over a specific period or of the general principles by which decisions are made" [102].
Transparency can be seen as a means of ensuring accountability [15,48]. By maintaining a record of machine transactions and the logic that influences decision making, retrievable records of any incidents ensure the attribution of accountability if failures occur [101]. Such traceability for the correct attribution of accountability is essential for meaningful human control [85]. Others have connected the principle of transparency to those of fairness [68] and robustness [94,105]. By ensuring adequate transparency, we can detect harmful biases and bugs in the system, helping us address incidents.
Transparency is also often linked to the concept of trust calibration [54,97], as it allows the user to adjust the level of trust they place in the system. In a transparent system, trust matches not only the system's superficial and momentarily observed performance but also the capabilities and limitations of its whole working envelope (often not observable over short durations), based on the internal workings of the system. Without adequate calibration of trust, users may overtrust or distrust the system, leading to misuse and disuse, and subsequent safety concerns [53]. By misuse, we refer to using the system outside of its intended purposes or specifications. By disuse, we refer to using the system at a reduced capacity, if at all.
Transparency is also tethered to explainability in that it is one way in which explanations can be offered. While these terms are often conflated in the literature, they should be understood for their distinct properties and effects. An explainable system is not necessarily transparent, and vice versa [102].

Explainable AI
As a field, Explainable AI (XAI) offers tools and methods to reveal details of the decision-making process of AI systems. Especially now, with Machine Learning (ML) methods demonstrating significantly improved performance, XAI has become a popular topic of discussion. The growing complexity of deep neural networks contributes to their opacity; they are considered "black-box models" by users and developers alike. The ambiguity of how decisions are made in black-box models is raising alarms now more than ever as AI systems and society become more intertwined. The result is a call for more transparency, explainability, and interpretability.
In this article, we consider explainability and interpretability to carry different meanings [56,65,92], despite the terms being used analogously in some instances of the literature. We consider an explanation to be an interface of understanding between the user and the AI system [9]. By explainability, we refer to the AI system's active offering of explanations geared towards the user. By interpretability, we refer to the extent to which an AI system or ML model can be promptly understood [83]; it is a formal understanding in a computational sense.
Some ML models are more interpretable than others; simple decision trees, for instance, are considered interpretable due to the ease with which one can follow knowledge transfer, while Deep Neural Networks (DNNs) do not offer the same opportunity for interpretability due to model complexity. Understanding black-box models like DNNs therefore requires XAI methods and techniques; the field distinguishes between a number of them as follows (a minimal sketch of the local-explanation idea follows this list):
- Global explanation: an explanation of overall model behaviour. For instance, Concept Activation Vectors (CAVs) [46] is a global explanation technique that attempts to describe the inner workings of neural networks using human-understandable concepts.
- Local explanation: an explanation of a single instance of the model output. Local Interpretable Model-agnostic Explanations (LIME) [80] is one example of a local explanation technique based on the assumption that all complex models are linear on a local scale.
- Post-hoc explanation: a description (based on examples or evidence) offered for why the system arrived at its predictions after the model has been developed [9], and not baked into the model itself.
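To ground the local-explanation idea, the following minimal Python sketch, ours rather than the LIME library's actual implementation, fits a proximity-weighted linear surrogate around a single instance of an arbitrary black-box model; the function name, sampling scale, and kernel are illustrative assumptions:

```python
# Minimal LIME-style local surrogate (illustrative, not the LIME library):
# perturb the instance, query the black-box model, and fit a linear model
# weighted by proximity to the original instance.
import numpy as np
from sklearn.linear_model import Ridge

def local_explanation(black_box, x, n_samples=500, kernel_width=1.0, seed=0):
    """Return per-feature weights approximating black_box around x."""
    rng = np.random.default_rng(seed)
    # Sample perturbations in the neighbourhood of the instance x.
    X = x + rng.normal(scale=0.1, size=(n_samples, x.shape[0]))
    y = black_box(X)                              # black-box predictions
    # Weight each sample by an exponential kernel on its distance to x.
    dist = np.linalg.norm(X - x, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    surrogate = Ridge(alpha=1.0).fit(X, y, sample_weight=weights)
    return surrogate.coef_                        # local feature importances
```

The surrogate's coefficients act as feature importances for that one prediction only, which is precisely what distinguishes a local explanation from a global one.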
The field of robotics presents new challenges for explainability that go beyond data-driven XAI techniques [8,72,90]. In order to achieve explainability, the robot is expected to communicate its actions and the reasoning behind its decisions, often to layman users, in a time-appropriate manner. This involves generating explanations based on the internal models of the robot, communicating explanations using an appropriate available modality, and further ensuring the explanations are understood by the human receiver. To achieve this, considerations for Theory of Mind (ToM) are made, where mental models of both robots and humans are developed to depict and predict their respective behaviours.
- Mental model: an internal, and influential, representation of the external environment or something therein [81]. This representation may include various state characteristics such as knowledge, beliefs, desires, intentions, emotions, and so on.
- ToM: forming mental models of others (and ourselves) in order to explain behaviours [37].
Misalignment of mental models can result in skewed perceptions and misunderstandings that are detrimental to trust. Trust is also essential for effective teaming, where joint activities may demand some level of interdependence between actors. In this article, we adopt Johnson et al.'s definition of interdependence and the subsequent dimensions of coactive design [45].
- Interdependence: "the set of complementary relationships that two or more parties rely on to manage required (hard) or opportunistic (soft) dependencies in joint activity" [45].
- Observability: the extent to which one's status is perceivable by others.
- Predictability: the extent to which one's actions are reliably foreseeable to others while considering their own actions.
- Directability: the ability to influence the behaviours of others and vice versa. This includes task allocation, suggestions, or warning signals, to name a few examples.
- Co-active design: an approach to the design of human-robot teams where each role is sophisticated and applied in complex domains, and interdependence is the core principle for performing the joint task(s) [45].
Interdependence is relevant to VA, where control is shared amongst humans and autonomous robots. In this context, a lack of understanding and trust also limits the extent to which levels of autonomy are efficiently adjusted [24]. Next, we describe terms from the VA community.

Variable Autonomy and Control Allocation Frameworks
Previous literature has used various terms to roughly indicate the general case in which control is allocated or shared between humans and robots' AI, and/or robotic systems in which the levels or degrees of autonomy can change. The Level of Autonomy (LoA) refers to the degree to which the robot, or any artificial agent, takes its own decisions and acts autonomously [91]. In robotic systems, LoAs can vary from pure teleoperation (where the human has complete control of the robot) to full autonomy (where the robot has control of every capability).
The terms used in the literature include, but are not limited to, VA, shared control, shared autonomy, adaptive and adaptable autonomy, adjustable autonomy, and sliding autonomy. Some of these terms are also used to denote a specific kind of control allocation paradigm or interaction. For example, shared control and shared autonomy are used to denote that control is shared between a human and a robot's AI in a continuous and congruent fashion (for example, by blending control commands) [4,89]. In this work, we use VA as an umbrella term that characterises human-robot systems and human-robot teams in which the levels or degrees of autonomy can vary dynamically during task execution.
An important aspect that characterises VA is the allocation of authority over the degrees or levels of autonomy: which agent (human or AI) has the explicit authority to initiate control allocation changes (for instance, changes in the level of control over the robot)? We consider the following three paradigms of control allocation authority to fall within the scope of what we consider VA (a minimal code sketch of these paradigms follows this list):
- Human-Initiative (HI): robotic systems in which only the human has the authority to allocate control [25]. This includes systems in which the robot operator uses their judgement [25] or advice from an AI system [10] to initiate control allocation changes.
- AI-Initiative (AII): robotic systems in which only the AI system (within the robot) has the authority to allocate control. This includes robot-initiative systems [23] and shared autonomy systems [89].
- Mixed-Initiative (MI): robotic systems in which both types of agents (humans and AI) have the authority to allocate control and initiate changes in control allocation [23,43,70,76].
In VA, understanding when, where, to whom, and why the transfer of control is needed can be critical to the system's effective use [60]. Such an understanding requires both transparency and explainability, and is the main motivation for this article. For example, an important aspect of VA is the control hand-off coordination strategy [43] when the control allocation or LoA is changing. Human operators need a sufficient understanding of the reasons behind such changes. Changing LoA, or taking and giving control, without sufficient explanation, and the resulting mismatch between the goals and expectations of humans and robots in the absence of appropriate communication and understanding, can lead to detrimental situations such as conflicts for control [23,26]. Proposed solutions to the conflict for control, such as negotiating the decisions [82], often do not explicitly consider transparency and explainability.
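To make the three paradigms concrete, the following minimal Python sketch, with hypothetical names and a deliberately simplified arbitration rule, shows how the authority to initiate an LoA switch differs across HI, AII, and MI systems:

```python
# Illustrative sketch (hypothetical names): who may initiate a change in
# the Level of Autonomy (LoA) under the three control-allocation paradigms.
from enum import Enum

class LoA(Enum):
    TELEOPERATION = 0   # human has complete control
    SHARED = 1          # control is blended between human and AI
    AUTONOMY = 2        # robot controls every capability

class Paradigm(Enum):
    HI = "human-initiative"   # only the human may switch LoA
    AII = "ai-initiative"     # only the AI may switch LoA
    MI = "mixed-initiative"   # both agents may switch LoA

def arbitrate(paradigm, requester, current_loa, requested_loa):
    """Grant or deny an LoA-switch request according to the paradigm."""
    if paradigm is Paradigm.HI and requester != "human":
        return current_loa    # robot-initiated requests are ignored
    if paradigm is Paradigm.AII and requester != "ai":
        return current_loa    # human-initiated requests are ignored
    # Under MI, either agent may initiate a change; a transparent system
    # would also communicate *why* the switch is happening.
    return requested_loa

# Example: under MI, the AI hands control back to the human operator.
new_loa = arbitrate(Paradigm.MI, "ai", LoA.AUTONOMY, LoA.TELEOPERATION)
```

In a transparent MI system, the arbitration step would additionally surface the reason for a granted or denied switch to the operator; it is precisely this communication gap that motivates this article.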

SURVEY METHODOLOGY
To scope the VA robotics literature, we used Elsevier's Scopus, which offers an expansive multidisciplinary database of peer-reviewed literature. We decided to use Scopus over other platforms, for example arXiv or Google Scholar, as we wanted to focus on peer-reviewed articles published in non-predatory journals. Our term identification, article retrieval, and results processing are based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework for academic research surveys [64]. Our methodology flowchart can be seen in Figure 2.

Queries Used
For the purposes of this review, we consider the AI and robotics literature combined, making sure to include the terms "robot", "robotic", "artificial intelligence", and "AI" as options in our search command. We limited the publication date range to 2018-2023, that is, the past six years at the time of writing. We chose to focus only on the most recent literature as, driven by the ongoing AI summer, there has been a boost in research interest related to trustworthy AI. Figure 3 shows the significant increase of publications in trustworthy AI as indexed by Scopus. December 2018 also marks the release of the High-Level Expert Group's draft ethics guidelines for trustworthy AI, further motivating our selected start date. In this review, one of the things we investigate is whether the same uptick can be seen at the intersection of those topics with VA. To survey the extent to which transparency and explainability are considered in such publications, we include terms such as XAI and transparency. As discussed in the previous section, these principles are important for fostering trust in AI systems, and it has been noted in the VA literature that trust has a significant influence on the extent to which operators utilise the ability to transfer and share control. An interest in studying and incorporating trustworthy AI for VA systems is therefore expected, and yet, to the best of our knowledge, the body of literature investigating this is limited.
As previously mentioned, we refer to the concept of varying the levels of autonomy assumed by human-robot teams as VA, but other terms that allude to the same concept were also considered. These terms are: adjustable autonomy, adaptable autonomy, adaptive autonomy, sliding autonomy, shared control, and even interdependence.
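Putting these pieces together, a Scopus advanced-search query of roughly the following shape would capture the intersection described above; this is a hedged reconstruction from the terms listed in this subsection, not the verbatim command used in the survey:

```
TITLE-ABS-KEY(
  (robot* OR robotic* OR "artificial intelligence" OR AI)
  AND ("variable autonomy" OR "adjustable autonomy" OR "adaptive autonomy"
       OR "adaptable autonomy" OR "sliding autonomy" OR "shared control"
       OR "shared autonomy")
  AND (XAI OR transparen* OR explaina* OR interpret* OR trustworth*)
) AND PUBYEAR > 2017 AND PUBYEAR < 2024
```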

Inclusion and Exclusion Criteria
We exclude articles in which "shared control" refers to the sharing of robot teleoperation between multiple human users. We also exclude articles that refer to interdependence but describe human-robot teams without any adjustments in levels of autonomy between human and robot actors. We exclude articles in which "transparency" refers to transparent materials that present a challenge for vision systems to accurately detect and grasp in assistive robotics. Last, some titles found on Scopus were inaccessible to us as full text and were, thus, discarded from the review. A table with all excluded titles can be found in Appendix A.

Results Analysis
To analyse the results of the structured review, we categorised our results into multiple taxonomies. First, the content was divided by the application domain of the use case presented in the reviewed article. This resulted in an initial list of eight categories. Closely related to the application domain, we also identified the effects of XAI or transparency measured in each article, resulting in four categories. Finally, we identified what information is deemed necessary to achieve those effects, to create a taxonomy of systems based on the type of information conveyed.

Limitations
The limitations of our approach include the following considerations. Some of the keywords we searched for are used in the robotics literature to refer to different concepts, as outlined in Section 3.2 above. This adds a step to our structured approach that requires an initial screening and a subjective evaluation of whether the term is used as we understand and define it. This review is non-exhaustive, meaning there may be some relevant publications that were not found (for reasons such as not being indexed by Scopus) and were therefore left out of the comparison and discussion. There may be other keywords that refer to the concept of VA that were not considered in our search, contributing to this particular limitation. Additionally, there may be some publications that implicitly discuss the concepts of transparency and explainability but do not explicitly list the terms in their title, abstract, or keywords upon publication.

RESULTS
The search query used in Scopus returned 74 articles; 43 articles were excluded from this pool as they refer to different concepts using the same terms, leaving us with 31 articles to review. It is noteworthy that if we do not use the terms XAI, explaina*, transparen*, interpret*, trustworth* in our search query, we get 1,049 publications. This means that only a minimal fraction of the recent VA literature explicitly discusses explainability and transparency. If we do the opposite and remove references to control allocation frameworks but keep the keywords related to XAI and transparency, we can see that the broader AI and robotics communities produced 10,796 Scopus-indexed articles related to transparency and XAI.
Fig. 4. Categorisation of each reviewed publication into one of the following application domains: assistive robotics, healthcare, autonomous vehicles, UAVs, industrial robotics, search and rescue, social robotics, and general (for those that discuss the concept of VA more broadly with multiple example domains).

Application Domains and Effects of XAI and Transparency
The texts reviewed span a variety of application domains, including: assistive robotics, healthcare, autonomous vehicles, social robotics, industrial robotics, search and rescue, and unmanned aerial vehicles (UAVs). Others discuss the application of transparency, explainability, and trust more generally across multiple domains. The split of publications across the different application domains is shown in Figure 4. Across all application domains, we found the following four different effects measured or otherwise discussed: user trust and acceptance (relevant papers summarised in Table A1), efficiency and performance of the human-robot team (Table A2), robustness (Table A3), and, finally, socio-ethical and legal obligations (Table A4).
In the remainder of this section, we discuss the literature from the perspective of application domain; this structure demonstrates how the domain's mission is a starting point for shaping the needs of explainability and transparency. In the appendix, we present a tabular overview of how these concepts have been applied in the literature.
Marino et al. [57] thoroughly discuss the concept of trustworthy AI in their design guideline proposal, using an industrial robot as a testbed. The authors describe shared autonomy as "splitting decisions between humans and machines". More specifically, they distinguish between assigning low-level tasks to machines and the "more complex high-level decisions that have long-term consequences to humans". This paradigm is described directly as a way to ensure humans can maintain accountability, a critical element for trustworthy AI. In fact, trustworthy AI is used as the frame and motivation for implementing shared autonomy. At the same time, shared autonomy systems are motivated by the additional data that can be generated for system improvement. Lack of transparency and explainability is linked to concerns of intention, which is one aspect of trust. Similarly, a lack of accountability for wrong decisions may also harm people's perception of intention. Another aspect of trust is competence, which is impacted by system robustness, for example. The authors highlight that shared autonomy is one way of achieving Augmented AI, which they argue is a more responsible paradigm for developing and deploying AI systems into society. This is motivated by the observation that it (a) provides a means of maintaining accountability by introducing the human somewhere in or around the loop and (b) addresses issues of robustness by ensuring tasks are delegated to the appropriate human or machine counterpart depending on competence. Methnani et al. [60] make a similar argument motivated by the need for meaningful human control, especially in high-risk and safety-critical scenarios. Issues of socio-ethical and legal obligations are also explored by Abbass et al. [3]. The authors introduce the concept of symbiomemesis, defined as the "socio-technical phenomenon whereby humans and machines persist in their ability to logically work together" over a prolonged period of time. Transparency and explainability are highlighted as two fundamental ingredients in their formal computational model. In their work, a curriculum for machine education is presented that aims to embed the foundational human knowledge needed for social integration.
In the application domain of social robotics, where humans and machines collaborate as a team, social integration is a key consideration. Interdependence plays a significant role in teaming, and a lot of emphasis is put on ToM, trust, robustness, fairness, and ethics. Cantucci and Falcone [16] propose a cognitive architecture for the development of robots that can adjust their level of "social autonomy", with an emphasis on transparency and explainability. The authors use ToM and XAI as theoretical tools to inform their architecture design. In their work, the authors discuss how explanations can limit conflicts arising from errors and support recovery, thus establishing a robust system. After the robot presents its outputs, the user can request a natural language explanation detailing the motivation for the robot's adopted strategy. The authors emphasise that the ubiquity of human-robot teams requires their "well-trained [and] verifiable performance", ensuring that rules of social engagement are adhered to while performing autonomously.
Social implications also heavily influence the direction of research in the autonomous vehicle (AV) domain. Gilpin [38] presents a case for anticipatory thinking in autonomous vehicles in an attempt to build trust. Here, the importance of ToM and self-introspection in stress-testing AVs is emphasised. This includes consideration of difficult scenarios and situations: for instance, "imagining possible futures" in the form of "hallucinations"; "counter-intuitive physics", such as potential circumstances that result in the vehicle learning to engage both the gas and the brakes simultaneously; "counterfactual reasoning" that allows the vehicle to explain alternative decisions that would have resulted in different outcomes; and "commonsense reasoning" that allows for logical deductions when faced with new scenarios. Such reasoning and introspection is inspired by how humans formulate thoughts and make subsequent decisions. Domeyer et al. [29] go even further and emphasise the need for a complete re-imagining not only of vehicle design but also of road infrastructure and accepted social norms in order to smoothly integrate AVs into our society. The authors present a communication framework for information exchange between AVs and other road users. This work highlights the importance of considering other consumers of information from the AI system. Here, the authors discuss interdependence in communication, ensuring that a common ground is developed between road users. Interpretability is explored in terms of signals in ambiguous situations. The authors propose that algorithm, interface, and interaction design must ensure automation is observable to other road users, a passive stakeholder who is sometimes neglected. Undermined safety and acceptance of vehicle automation are brought forth as consequences of poor communication.
Healthcare is another application domain worth highlighting due to its heavy regulation and the high stakes involved in clinical practice. Schleer et al. [88] use haptic guidance to assist cooperative surgeries. The force feedback is presented as a means of achieving transparency, ensuring the operators have a complete grasp of the environment that they are tele-manipulating. The usability of the proposed system is evaluated and discussed. Most interestingly, the effects of haptic feedback are presented as controversial in the literature; some studies show a positive impact, while others show the converse. In their study, the authors show that haptic guidance offers improvements in perceived workload and system usability. While no performance enhancements were found, efficiency saw a subjective improvement. Whether to employ haptic feedback for transparency is therefore considered a subjective and contextual choice.
Sanchez et al. [84] developed a haptic foot interface for the continuous control of assisted surgical procedures. The intention is to alleviate operator fatigue while preserving what the authors specify as transparency of control. Haptics is introduced as a means of offering awareness of the interaction. More specifically, they seek to improve control over the mission at hand by alleviating the difficulty of simultaneously grasping the surgical gripper while performing the following specified gestures: reaching, aligning, and grasping the target. The authors found that shared control improved task completion and efficiency while decreasing the number of failed attempts to grasp using the surgical robotic tools. Shared control was also credited with reducing both the mental and physical workloads of the surgeon. The aforementioned performance metrics are selected based on the particular challenge they aim to address with their system, but it is unclear whether the haptic feedback specifically introduced for transparency contributes to them. Still, haptics are said to have contributed towards the platform's versatility in particular.
Tröbinger et al. [98] present a dual doctor-patient twin paradigm for the examination, diagnosis, and rehabilitation of patients remotely. The system, equipped with bidirectional haptic feedback, is described as one that enables a more comprehensive patient journey despite being fully remote. The transparency discussed here is one of physical "closeness", where both the doctor and the patient feel as close as possible to the remote environment. This work promotes remote consultation and examination through bidirectional telepresence control. AI is used for anomaly detection, recommendations, and assisted diagnosis. Novel senses and robotic capabilities are utilised to support remote healthcare. Not only is increased access to doctors one motivation for this work, but also the augmentation of doctor knowledge and skills through AI technology (data-driven, behaviour-based, and model-based techniques are all mentioned). The digital twin is said to have the ability to jointly perform exercises with the patient and record "interpretable telemetry", further used to analyse progress in the patient's treatment plan. Ultimately, this work presents an interaction paradigm that involves multiple modalities in the implementation to support medical diagnostics and patient rehabilitation at a distance.

Implementation Considerations
Interface design is a significant discussion point in the literature when it comes to transparency and XAI. Transparency as a property of the interface can also be presented at multiple levels [6]. The bare minimum demands a comprehensive view of robot internals such as state, goal, plan, and progress. Increasingly transparent interfaces should enable an accurate prediction of robot performance based on past evidence. These intend to address situational awareness issues and remedy human errors that result from poor communication. For instance, Ramesh et al. [77,78] promote the need to systematically quantify robot performance degradation over time to improve operator judgement and support explainability (e.g., via visual cues). They introduce the concepts of "Robot Vitals" and "Robot Health", robot state indicators that correlate with its ability to continue operating without performance degradation, with the intent and potential to map them to LoA switches.
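As a minimal sketch of how such indicators could drive VA, assuming (unlike Ramesh et al.'s actual formulation) a simple weighted aggregate of normalised vitals, a drop in health below a threshold might be surfaced to the operator together with a suggested LoA switch:

```python
# Illustrative sketch (not Ramesh et al.'s actual formulation): aggregate
# hypothetical normalised "vitals" into one health score and map a drop
# below threshold to a transparent LoA-switch suggestion for the operator.
def robot_health(vitals, weights):
    """Weighted aggregate of vitals in [0, 1]; 1.0 means fully healthy."""
    return sum(v * w for v, w in zip(vitals, weights)) / sum(weights)

def suggest_loa_switch(health, threshold=0.4):
    """Return a human-readable suggestion, or None if health is adequate."""
    if health < threshold:
        return (f"Performance degrading (health={health:.2f}): "
                "suggest switching to teleoperation.")
    return None

# Hypothetical vitals: localisation confidence, traction, CPU headroom.
message = suggest_loa_switch(robot_health([0.3, 0.2, 0.9], [0.4, 0.4, 0.2]))
```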
While some describe variations in transparency as a property of changing LoA [6], others consider the ability to adjust the level of transparency a requirement. In their study, Olatunji et al. [67] look at the relationship between LoA and its impact on the level of transparency required in robotic table assistance (such as picking things from or putting things on a table) for the elderly. The authors utilised two levels of transparency in combination with two levels of autonomy. The levels varied in the amount of information offered to the user in text form via a visual aid. The low level answered "what", corresponding to the robot's state, while the high level answered "why", offering a reason. The results showed that participants preferred less information while LoA was low. The authors attribute this to the fact that paying attention to the information on display while actively operating the system was more cognitively demanding. When LoA was high, participants indicated a preference for more information to be presented. The ability to express the current state along with a rationale also contributes to the observability and predictability of the system, two important considerations in the co-active design of interdependence theory.
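A minimal sketch of these two transparency levels, with hypothetical message templates rather than the study's actual wording, could look as follows: the low level reports only the robot's state, while the high level appends the rationale:

```python
# Minimal sketch (hypothetical templates): the two transparency levels,
# where "low" answers *what* the robot is doing and "high" adds *why*.
def status_message(state, reason, transparency="low"):
    what = f"The robot is {state}."
    if transparency == "high":
        return f"{what} Reason: {reason}."
    return what

# Example: high transparency while the robot operates at a high LoA.
print(status_message("pausing near the table",
                     "an obstacle is blocking the tray", "high"))
```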
Wang et al. [100] discuss using co-active design for manned-unmanned aerial vehicle teams and graph neural networks (GNNs) for explainable human-robot collaboration, where the core principle states that autonomy is shaped by interdependence. Not only is LoA context-dependent, but so is the level of cooperation, which could be perceived as the level of information exchange. ToM determines the team "balance". As per interdependence and co-active design principles, there are observability, predictability, and directability requirements among team members. The authors use graphs to describe Control Authority Adjustments due to their interpretability. The status of the human-robot team in a UAV setting is displayed as graphs for intuitive perception. Co-active design is promoted such that humans can effectively supervise and intervene as needed.
Other recommendations for improving transparency include operator training and allowing the system to learn from experience [6,52]. Qiao et al. [75] use Learning from Demonstration (LfD) to induce more interpretable motions in robotic assistance tasks like grasping objects and pouring liquids. The assistive motion is then blended with user input for successful task completion. The authors compared their approach with state-of-the-art shared control solutions and found that using LfD reduced task completion time, the number of inputs using joystick control, and the angular difference between user input and assistive motion. Moreover, the authors note a higher subjective score for "user preference" and "perceived speed" using their LfD system.
Fearn et al. [33] also propose such feedback-based learning in their design of a smart powered wheelchair equipped with shared control to help disabled people navigate indoor spaces, like their home, without the cognitive burden of performing difficult manoeuvres. The wheelchair is endowed with a vision-based navigation system, performing segmentation and classification of objects to help determine the context of the environment. More notably, the authors do not assume sufficient robustness of the vision system, but rather ensure that it requests labelling support from the wheelchair user in order to learn a map of the home environment. The requests from the system are posed in natural language that is understandable by the user. Additionally, example-based prompts for confirmation are proposed, which offer some clues as to the reasoning behind particular predictions or intents of the system: "I see a kettle and a microwave. Are we in the kitchen?" Indeed, there are various approaches to communicating intent and the reasoning behind it. The literature explores multiple channels along which information can be exchanged. Many explore the ways in which visual interfaces, including Virtual Reality (VR) and Augmented Reality (AR) displays, can be utilised [58,77,78,107,108,109]. Others propose verbal or non-verbal, natural or artificial language approaches [3,33,63,99]. A number of studies look into the use of haptic feedback as a communication interface, often supplemented with a visual component [74,84,88,98]. In fact, many recognise the benefit, or even the need, of taking a multi-modal approach for richer information exchange [3,6,29,57,58,74].
Zolotas and Demiris [108] rely on visual interfaces for their assistive robotics work, presenting an Explainable Shared Control (XSC) paradigm intended to allow both humans and machines to construct accurate mental models of one another. The risks of misaligned mental models include obstruction and user frustration, which in turn result in users rejecting system assistance [107]. Explanations are presented as a means of resolving the discrepancy between the human's expectations of the system's plan and its actual representation. Exposing internal robot representations is one proposed approach for generating explanations, namely using AR as a bridge of communication between human and machine. Shared control systems depend heavily on continuous communication and implicit signals of intent. The XSC paradigm attempts to support this requirement using AR and head-mounted displays [107,108]. Predicting intent runs two ways: the human perceives the robot's intent, and the robot perceives the human's. While there is an opportunity to use deep learning algorithms to model and predict human intent, Zolotas et al. highlight that these "black box algorithms are notoriously difficult to explain to end users" [107,108]. Instead, the authors promote their guidelines for predicting user intent in a way that is human-interpretable. For instance, in their study applied to smart wheelchairs for assistive robotics, Zolotas et al. [109] make use of the differential drives of robotic wheelchairs and the human's joystick commands to determine the intended movement trajectory [107]. Furthermore, it is noted that transparency is not achieved by representing the low-level details of input and output data streams. Rather, to avoid cognitive overload, the need for high-level abstractions is recognised and used to indicate when threats of collision are detected. Predictive visualisations were used to tackle the question: "When does the robot decide to intervene?" [107]. The study, which was conducted with 18 able-bodied volunteers, confirmed that their system efficiently improved recovery from "adverse events" induced by intent misalignment.
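As a minimal sketch of this kind of human-interpretable intent prediction, assuming a standard unicycle/differential-drive model rather than the authors' exact method, the user's joystick command (linear velocity v and turn rate omega) can be projected forward to obtain the intended short-horizon path that a predictive AR visualisation might render:

```python
# Illustrative sketch, assuming a unicycle/differential-drive model:
# project the user's joystick command forward in time to obtain the
# intended short-horizon trajectory for a predictive visualisation.
import math

def predict_arc(x, y, theta, v, omega, horizon=2.0, dt=0.1):
    """Integrate the unicycle model; returns the intended (x, y) path."""
    path = [(x, y)]
    for _ in range(int(horizon / dt)):
        x += v * math.cos(theta) * dt     # forward motion
        y += v * math.sin(theta) * dt
        theta += omega * dt               # heading change from turn rate
        path.append((x, y))
    return path

# Joystick pushed forward with a gentle left turn: 0.5 m/s, 0.3 rad/s.
intended_path = predict_arc(x=0.0, y=0.0, theta=0.0, v=0.5, omega=0.3)
```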
Mbanisi et al. [58] also investigate the communication of intent through the use of AR, but choose to include an additional output modality: haptic feedback. The authors present a haptic shared control framework for mobile telepresence robots navigating environments populated by humans. This framework considers transparency as well as social constraints for active collision avoidance. The authors propose this method of communicating intent as a means of enhancing mutual understanding and system transparency. An increased perception of system transparency and overall improved cooperation is hypothesised, but no experiments are conducted in the article.
The choice of AR visualisations for explainability is distinctive in the shared control literature, where the employment of haptic interfaces is more prevalent (for example, [74,84,88,98]). Pocius et al. [74] discuss the challenge of user acceptance of shared control systems despite the benefits of robot autonomy. Lack of transparency and the general perception of sacrificing control are considered reasons why users are reluctant to adopt and interact with such systems. Haptic feedback as an interface for information exchange during manipulation tasks is presented as one method of remedying this symptom. In the authors' study, which looked at assistance in grasping tasks, they observed the effect that haptic feedback has on human subjects' ability to interpret intent. Participants acknowledged that they could predict the robot's intent, making statements such as, "I can predict that the robot is headed to the other bottle." However, they expressed frustration when the action was counter to the human's high-level goal. While haptic feedback can support the communication of intent, it fails to explain the reasons behind the robot's decisions.
Transparency is discussed in the VA literature beyond expressing intent and the reasoning behind actions. With situated robots in particular, there is much emphasis on the need for the operator to feel as close to the remote environment as possible and to feel the least resistance from the remote system. Some describe this as mechanical transparency [6], but it can be more intuitively described as physical transparency. Physical transparency ensures that the teleoperated robot, when under the control of the human, follows the human's actions exactly, a quality that can degrade with increased communication latency between the human and the remote system. This type of transparency is discussed in the domain of healthcare, for example, particularly in surgical robotics, where the precision of actions and perceptions is critical to the operation at hand. Table A5 in Appendix A summarises the articles found that discuss this type of transparency.

Geographic Analysis
In Figure 5, we show where each publication's main author is based in the world. This is to show the geographical spread of the research considered. The majority of the publications come from the United States, followed by the United Kingdom.

DISCUSSION
Our results indicate a disconnect between the VA community and those of both transparency and explainability. Research in transparency and explainability is blooming, but when it comes to being applied to, or otherwise connected with, aspects of control allocation within the VA community, there is little work done. Yet, the reviewed literature indicates a strong consensus on how important transparency and explainability are for the maturation of the VA field. As we present in our results section, the concepts of transparency and explainability are considered means of ensuring trust, robustness, and compliance, but also of improving efficiency and performance in human-robot teaming. Even in the wider AI literature, transparency, and its facet traceability, has been linked to concepts around human control [15,48,62,85]. The lack of consensus on guidelines for developing VA systems is one challenge that presents itself with regard to achieving these qualities. Furthermore, the domain to which VA is applied informs the overall context within which transparency and explanations are needed and thus discussed.

Application Domains
In this survey, we identify eight application domains within the overarching field of robotics: assistive robotics, healthcare, autonomous vehicles, UAVs, search and rescue, industrial robotics, social robotics, and general (for articles that discuss VA across multiple domains). These were determined manually by judging the available keywords, the described experiments, and the example applications discussed in each article. As an example, let us consider one of these domains and identify an overarching mission that may further inform the values that drive developments in the corresponding industry. In assistive robotics, one mission is to support human users, such as elderly adults and disabled individuals, in their daily lives [34]. The robotic devices developed can play a significant role in promoting independence and quality of life [22]. These devices include, but are not limited to, robotic wheelchairs, robotic feeders, and social rehabilitation robots that have the capacity to boost morale and human agency in those in need of assistance. Considering this aim, many assistive robotic devices are developed with VA in mind [19,31,49]. The user's intent is thus important for the robot to model and carefully consider while operating autonomously [41,71]; a point most frequently echoed throughout the literature of this domain. Without an accurate model of the user and their intended actions throughout operation, the risk of user frustration, mistrust in the system, and subsequent disuse runs high [82]. Following our observation that the application domain frames the purpose that transparency and explainability are intended to serve, we can see that alleviating user frustration and boosting the perception of control are important challenges to target for assistive robotics in particular. This puts the end user at the forefront as the receiver of any information offered by the system. Local explanations, and visualisations in particular, are thus popular implementation approaches for explainability due to their interpretability to non-technical users [9]. Furthermore, the modality needs to complement the end user's tasks, for example, overlaying visualisations in such a way that the user's view is not obstructed, or offering information through haptic feedback in cases where visual stimulus might overload the end user. It is also worth considering cases where the user may have visual impairments, rendering explainability paradigms that rely only on visualisations useless. Considerations for alternative modes of communicating intent and rationale, such as through auditory channels or with haptic feedback, are therefore important.

Dimensions for Explainability and Transparency
The challenges that VA targets in each domain vary, and developments are thus driven by different values. We call these missions; each mission that the human-robot team needs to achieve characterises the who (for whom), the what, the why, and, ultimately, the how of best implementing explainability and transparency in VA systems.

Who Needs to Know?
The starting point for understanding how to best implement either explainability or transparency should always be identifying the consumer of the information (for example, the end user) [65,97,102]. With changing levels of autonomy, observability and predictability also change. As such, there is a demand by the end user (the operator) for the system to be more transparent while the LoA is high, and less transparent while the LoA is low [6,67,79].

What Do You Need to Know?
Closely coupled to the who is the what that needs to be known: the information that is relevant in the context of our system and that can be provided. This is somewhat complex in the case of VA due to the multiple dimensions along which VA can be adjusted [60]. Considering where LoA is changing within the system, there are multiple instances where transparency and explanations are needed. One particularly important case is goal misalignment [17,18]. Control conflicts can occur when the operator and the robot want to optimise different things, resulting in user frustration and even task failure [23,82]. System failures are also important events that should be reported and further explained, not only for system developers but also for end users and observers. Knowledge of system failures allows the end user to make better judgements concerning the transfer of control, thereby boosting user acceptance and satisfaction [17]. For observers, understanding failures informs their decisions and actions around the system. Along the same lines, awareness of the boundaries of both the system's and the human's capabilities is also crucial for understanding how to optimise VA [35].
Beyond goal adjustment, system failure, and capabilities, altering plans or proposing new strategies towards a goal can also induce frustration, thus requiring information and explanation. Furthermore, the choice of utilising local, global, or cohort explanations depends on the stakeholder [65]. An end user would make much more use of a local explanation, describing why the system made this choice for this particular instance. A practitioner, however, would like to understand the system's overall behaviour and thus turn to global explanations. An observer, such as an auditor of the system, might be interested in understanding how explanations compare across cohorts of input (for example, do an autonomous vehicle's manoeuvres differ during sunset compared to daytime?). However, providing such explanations may not always be possible, or even permissible, for technical, legal, or even economic reasons [7].
In the VA literature reviewed, the majority discusses post-hoc, local approaches to communicating intent and/or rationale, likely because the non-expert end user is the target audience. Some discuss taking a more global approach by using interpretable models; for instance, using GNNs for switches in LoA with the motivation that graph structures reflect relationships between data more intuitively [100]. Still, it should be recognised that deep GNNs are not necessarily considered interpretable, and there is a body of literature dedicated to developing XAI techniques for making GNNs understandable [106]. This is another indication to us that there is a disconnect between the VA community and the wider AI/robotics community when it comes to explainability and transparency research.

Why Do You Need to Know It?
Next, we motivate identifying the desired outcome for the consumer after they receive this information; this involves outlining why explainability or transparency is needed. Most articles discuss the need for transparency and explainability to boost user acceptance and trust; efficiency and performance are also strong motivations. This is in harmony with other observations in the literature, where industry adoption is presented as a driving force for introducing transparency and explainability into intelligent systems [9]. Surprisingly few talk about the need for robustness or socio-ethical and legal obligations, yet these are valid motivations and should be emphasised more.

How Should We Communicate?
Establishing the who, what, and why can better inform how explanations should be expressed; for example, via visual feedback [108], haptic feedback [58], or natural-language presentation [33]. In the robotics literature, the discussion of how is often specific to the output modality (for example, haptic feedback), and not necessarily to any particular class of explainability or transparency technique (such as example-based explanations).
The properties of the interface contribute strongly to the effectiveness of communicating both intent and rationale. The VA literature explores multiple modalities: visual, haptic, audio, natural and artificial language, as well as multi-modal approaches for various domains. Indeed, domains with different missions will utilise the modalities that apply most fittingly to the context. VA systems, in particular, might require human-robot cooperation for a task where other channels are already overloaded with too much information [5,87]. In such instances, a multi-modal approach could be the way forward, as it distributes information across multiple channels for efficient end-user uptake. Information communicated over such interfaces includes intent as well as rationale, contributing towards the explainable component of the system. Table A6 in Appendix B summarises articles that mainly describe communication of intent, communication of rationale, or both. It is perhaps not surprising that the VA literature dives straight into the interfaces and output modalities utilised for communicating intent and reasons; the target is most commonly the end user (often the operator) who shares authority with the robot. Interestingly, practitioners and observers are not thoroughly investigated as audiences for the generation of intent and reasons (although some articles do consider them, particularly in the case of AVs, where other road users are accounted for). From the consideration of who follows the consideration of why an explanation or transparency is needed, for instance, trust and user acceptance, or robustness. Who also determines what, for example, LoA switching or the state of the system. The why and what then influence how it can be achieved, highlighting the modality that supports it, such as haptic feedback or natural language, to name a few.

Recommendations and Research Challenges
In Figure 6, we describe our proposed approach to determining how information or explanations can be presented for VA. Who the information must serve determines what information is needed and why.
The overarching mission that the human-robot team has to accomplish, which in turn is often dictated by the application domain, should also determine the approach that can and should be taken. After all, explanations serve a purpose for the different stakeholders and should therefore be driven by their values and needs within the different contexts of the domain. While this line of thinking is driven by the XAI literature, we argue that similar considerations should be made for transparency.
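The following sketch illustrates this chain of considerations as a simple data structure, assuming a hypothetical search-and-rescue mission and a hard-coded stakeholder mapping; a real system would derive these answers from stakeholder analysis rather than a lookup table.

# Illustrative sketch only: encoding the who -> why -> what -> how chain
# of Figure 6 so the design questions are answered in order. Stakeholders,
# needs, and modalities are hypothetical examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class DesignSpec:
    mission: str   # overarching mission / application domain
    who: str       # stakeholder: end user, practitioner, observer, ...
    why: str       # desired outcome: trust, robustness, compliance, ...
    what: str      # information: LoA switch, system state, plan, ...
    how: str       # modality: haptic, visual, natural language, ...

def design(mission: str, who: str) -> DesignSpec:
    # Toy mapping for a search-and-rescue mission.
    table = {
        "end user":     ("trust and acceptance", "LoA switch and intent",
                         "visual + haptic feedback"),
        "practitioner": ("debugging and robustness", "internal state and rationale",
                         "structured logs and natural language"),
        "observer":     ("predictability and safety", "current intent",
                         "external visual signals"),
    }
    why, what, how = table[who]
    return DesignSpec(mission, who, why, what, how)

print(design("search and rescue", "end user"))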
Currently, there are no standardised means for evaluating explainability [61]. Often, researchers take a combination of subjective and objective measurements to assess it. Subjective measures include user satisfaction scores and other perception measurements retrieved from interview responses. More objective measures include task performance and efficiency, amongst others. This combined approach is also seen in the VA literature, emphasising that the motivations within XAI are human-centred as well as performance-driven. While a trade-off between interpretability and performance is commonly assumed, this perspective has been challenged [83]. It is therefore important to emphasise that interpretable AI systems are desirable not only for socio-ethical reasons but also for system performance. Moreover, there exists an opportunity to develop evaluation frameworks that take into consideration both human factors and technical performance.
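As a minimal illustration of such a combined framework, the sketch below aggregates subjective Likert responses with objective task measures; the measures, scales, and the 120-second time budget are assumed for illustration and are not a proposed standard.

# Illustrative sketch only: combining subjective and objective measures
# into one evaluation record for an explainability study.
from statistics import mean

def evaluate(satisfaction_items, task_times_s, successes):
    """satisfaction_items: 1-5 Likert responses; task_times_s: seconds
    per trial; successes: booleans per trial."""
    subjective = mean(satisfaction_items) / 5.0      # normalise to 0..1
    success_rate = sum(successes) / len(successes)   # 0..1
    # Normalise completion time against a nominal 120 s budget (assumed).
    efficiency = max(0.0, 1.0 - mean(task_times_s) / 120.0)
    return {"subjective": round(subjective, 2),
            "success_rate": round(success_rate, 2),
            "efficiency": round(efficiency, 2)}

print(evaluate([4, 5, 3, 4], [95.0, 110.0, 80.0], [True, True, False]))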
Similarly, there are no standardised metrics for transparency. The "IEEE Standard for Transparency of Autonomous Systems" [39] was released in 2022, the last publication year included in our survey. The Standard provides multiple requirements, spread across different stakeholders and levels of compliance [102]. While its adoption remains to be seen, even if adopted it does not provide any metrics for evaluating the effects of implementing transparency. We therefore advocate for the use of standardised tests from the Human-Computer Interaction and Human-Robot Interaction communities, such as the "Godspeed Questionnaire" [12]; through such widely used tests, we can perform meta-analyses and cross-validation of results across different studies.
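For instance, Godspeed-style subscale scores can be computed as item means over 5-point semantic-differential responses, as in the minimal sketch below; the item counts shown are illustrative, and the actual instrument in [12] should be consulted.

# Illustrative sketch only: scoring Godspeed-style subscales as item
# means over 5-point semantic-differential responses.
from statistics import mean

responses = {
    "anthropomorphism":       [3, 4, 2, 3, 4],
    "animacy":                [4, 3, 3, 4, 5, 3],
    "likeability":            [5, 4, 4, 5, 4],
    "perceived_intelligence": [4, 4, 3, 5, 4],
    "perceived_safety":       [4, 5, 4],
}

scores = {scale: round(mean(items), 2) for scale, items in responses.items()}
for scale, score in scores.items():
    print(f"{scale}: {score}/5")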
We summarise our recommendations and insights as follows:
- Guidelines that are to be developed for VA should include requirements for trustworthy AI, with a particular emphasis on explainability and transparency.
- VA research and development should, at least on a basic level, follow best practices from the trustworthy AI literature and explicitly refer to their importance for trustworthy VA. That is, no development or evaluation of a VA system should be considered complete without at least some concern for principles of trustworthy AI.
- The implementation approach to realising explainability and transparency for VA (the how) should be guided by considerations for the audience (the who), the information that needs to be made accessible to the audience (the what), and the necessary function this information fulfils for the audience (the why). All of this should resonate with the overarching mission of the VA application domain.
Specific transparency and explainability challenges we see within VA include the following:
- The state and intent of the agents (elements of a ToM) need to be communicated and explained for effective changes in the levels or degrees of autonomy and transfers of control within VA systems; for instance, clear explanations for transfers of control in mixed-initiative robotic systems (a minimal message sketch follows this list).
- Alignment of expectations and perceptions of the world between all agents, that is, humans and robots, in the VA team. This alignment involves technical problems, for example, how and when to communicate a shared model of the world between humans and robots, but also socio-governmental problems, for example, what should be subject to algorithmic processing and storage.
- Systematic experimental evaluation of VA robotic systems, and of their trustworthy AI aspects in particular. The previously mentioned best practices from the Human-Computer Interaction and Human-Robot Interaction communities can be used as a starting point. Still, an opportunity exists to establish standard trustworthiness evaluation for interactions involving transfer of control to embodied autonomous agents.
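To illustrate the first challenge, the sketch below shows one hypothetical shape such communication could take: a structured notice pairing state and intent with a human-readable reason before a transfer of control. The field names and values are ours, not a proposed standard.

# Illustrative sketch only: a minimal message a robot might emit before a
# mixed-initiative transfer of control, pairing its state and intent (the
# ToM elements above) with a human-readable reason.
from dataclasses import dataclass, asdict
import json

@dataclass
class ControlTransferNotice:
    current_loa: str   # e.g., "shared control"
    proposed_loa: str  # e.g., "teleoperation"
    state: dict        # relevant internal state made transparent
    intent: str        # what the robot is trying to achieve
    reason: str        # why the transfer is requested

notice = ControlTransferNotice(
    current_loa="shared control",
    proposed_loa="teleoperation",
    state={"localisation_confidence": 0.42, "battery": 0.77},
    intent="reach victim location safely",
    reason="localisation confidence dropped below 0.5 in cluttered area",
)
print(json.dumps(asdict(notice), indent=2))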

CONCLUSIONS AND FUTURE WORK
In this work, we presented a survey of the VA robotics literature that considers two significant elements of Trustworthy AI: transparency and explainability. We considered VA as an umbrella term characterising human-robot teams in which the level of autonomy can vary dynamically. Hence, in our survey, we included work from the sub-fields of shared control and shared autonomy, adjustable autonomy, and interdependence, amongst others. We find that the VA literature does not often consider these concepts explicitly. Based on our survey, we proposed a way of thinking about VA by breaking down considerations for transparency and explainability based on: the mission of the human-robot team; who the stakeholder is (practitioner, end user, observer, etc.); what they need information about (intent, switch in LoA, plan, etc.); why it is needed (user satisfaction, robustness, socio-ethical and legal obligations, etc.); and how it can be achieved (by communicating intent and reasoning through natural language and visual aids, etc.). We consider the application domain to be pivotal, setting all subsequent specifications into motion.
To conclude, it is clear that a greater push towards developing transparency and explainability research in VA is required. More specifically, more engagement between the VA community and the more general Trustworthy AI community is needed. There is a large body of literature that describes, in a structured manner, various methods and techniques that can reveal and explain autonomous decision-making to humans. However, these approaches are mostly tailored towards ML models. Considerations for the new challenges that robots with VA present are minimal when it comes to transparency and explainability. While humans and robots share initiative in real time, methods for transparency and explainability should alleviate or support the cognitive demand placed on stakeholders in their tasks, within their application domain. Many publications reviewed in this article develop their methods with strong consideration of cognitive workload.
These methods for improving transparency and explainability are not exhaustive, and opportunity exists to investigate novel techniques that are more easily interpreted at run-time, such that on-the-fly decisions, whether a switch in LoA or an adjustment of other VA parameters, can be made more confidently. We invite the VA community to collaborate more closely with the broader AI community in developing suitable techniques for real-time use that are interactive and, in and of themselves, variable too. After all, changing levels of autonomy will require changing levels of information at different points in time, along with considerations of authority and context. Identifying the most opportune moment to present information is also a critical point to examine. Considering the challenges of cognitive overload and situational awareness, a tactical approach to timing is essential for effective transparency and explainability; we leave this investigation to future work.
Table A2. Brief description of articles that discuss efficiency and system performance as prominent motivations for transparency and/or explainability.

Efficiency and Performance of the Human-Machine Team (N = 12)
Publication | Description of Transparency or Explainability Solution Presented
Fitzsimons et al. [36] | Assistive Robotics; Promote effective task training using haptic feedback and visuals to communicate to human users when their inputs are rejected (i.e., overridden) or accepted by the system.
Lawless et al. [51] | General; Suggest the need for mathematical formulations of the theory of interdependence as a step towards explaining shared human-AI contexts and further ensuring efficient human-robot teaming.
Lawless et al. [50] | General; At a high level, discuss the need for interdependence for better teaming and take a first step in formalising interdependence theory mathematically such that human-robot teams can evaluate team efficiency.
Mbanisi et al. [58] | Social Robotics; Align human control input to the computed input for remote mobile teleoperation within populated environments, using haptic and visual feedback as a means of making the (shared) control task more physically transparent to the operator.
Mirbabaie et al. [63] | Healthcare; Use verbal and non-verbal communication as a means of transparently communicating with healthcare professionals to boost collaboration within the hospital setting. Transparency is also characterised by relevance, such that physicians are enabled to fulfil their duties without delay.
Qiao et al. [75] | Assistive Robotics; Use LfD for more human-like, and thus more legible, robotic motion, blended with user input for system efficiency and the user's improved perception of control.
Sanchez et al. [84] | Healthcare; Use a haptic foot interface to support laparoscopic surgical procedures, making the robotic procedure more physically transparent and alleviating surgeon fatigue.
Schleer et al. [88] | Healthcare; Use haptic and visual displays as a means of human-robot communication through surgical (bone-milling) tasks, with the aim of improving both efficiency and effectiveness.
Tröbinger et al. [98] | Healthcare; Use haptics together with audio-visual modalities to promote physical transparency in telepresence-based patient diagnostic and intervention tasks. Motivations include increasing accessibility to quality healthcare while also augmenting doctor competences.
Verhagen et al. [99] | Search and Rescue; Describe adjustable levels of interdependence within human-robot teams and study their influence on the required level of transparency and explainability, communicated using, for example, natural language or symbols as visual representations.
Wang Z. et al. [100] | UAVs; Use the theory of interdependence and co-active design for formal representations and visualisation of relationships within multi-agent task assignment in mixed-initiative UAVs, further promoting effective teaming. Adjusted levels of autonomy are determined using Graph Neural Networks, selected for the collective performance of neural networks and the interpretability of graphs.
Wang C. et al. [20] | UAVs; Use the theory of interdependence and the co-active design principles of Observability, Predictability, and Directability to present a human-interpretable planning module in mixed-initiative UAVs and improve efficiency and team performance.
Table A3. Brief description of articles that discuss robustness as a prominent motivation for transparency and/or explainability.

Robustness (N = 3)
Publication | Description of Solution Presented
Cantucci and Falcone [16] | Social Robotics; Develop a cognitive architecture for socially adjustable autonomy where the LoA exercised is determined by the robot's self-evaluation of trustworthiness. The architecture is designed to provide comprehensive explanations, which is said to limit the conflicts that arise due to errors and to support recovery, thus establishing a robust system.
Gilpin [38] | Autonomous Vehicles; Propose the need for anticipatory thinking as stress tests for autonomous vehicles, for example, simulating hypotheticals or explaining counterfactual outcomes, in an attempt to make them more robust and trustworthy.
Marino et al. [57] | Industrial Robotics; Promote the need for augmenting human intelligence instead of replacing it for more trustworthy AI systems. Introducing the human somewhere in or around the loop addresses issues of robustness by ensuring tasks are delegated to the appropriate human or robot counterpart depending on competence. This addresses a few points for each aspect of trust.

Table A5. Article split by type of transparency: physical or conceptual.

Physical Transparency (aka "mechanical") (N = 3)
Publication | Description
Sanchez et al. [84] | Improving grasping tasks in laparoscopic surgery through haptic feedback for increased "physical" transparency. Results in increased completion and efficiency, together with a reduction in mental and physical load.
Schleer et al. [88] | Haptic guidance to assist cooperative surgeries that involve tissue differentiation and bone-milling tasks.
Tröbinger et al. [98] | Propose a twin paradigm for doctor-patient remote examination, diagnostics, and rehabilitation. Bidirectional telepresence control requires physical ("mechanical") transparency for natural operation and interaction between doctor and patient.

Conceptual Transparency (robot/AI reveals its internal state to the user) (N = 5)
Zolotas et al. [109] | AR-rendered representation of the shared controller's internal state onto the driver's view of the world.
Zolotas and Demiris [107] | Alleviating user frustration and further supporting broader commercial adoption of shared control systems.
Zolotas and Demiris [108] | Explanations are presented as a means to resolve the discrepancy between the human's expectation of the system's plan and its actual output.
Alonso and de la Puente [6] | Observe that different levels of autonomy present different challenges when it comes to transparency. Suggestions are: intermediate levels of autonomy where tasks are delegated to the machine (here trust is paramount); improved interface design; and learning from experience.
Ramesh et al. [77] | Revealing robot vitals as a representation of internal state that contributes to quantifying robot health and subsequent degradation.


Fig. 1. Relational breakdown of the major concepts discussed in the background section.

Fig. 2. PRISMA flow chart illustrating our methodology from the identification of articles through to final inclusion, with reasons for article exclusions where relevant.

Fig. 3. Count of publications with "trustworthy" and "AI" in the abstract, title, or keywords, as indexed by Scopus, over the years. A significant increase is seen between 2015 and 2020.

Fig. 5. A geographical overview of the selected papers based on their main author's affiliation.

Fig. 6. Sketch showing example considerations of who, what, why, and how explainability and transparency are approached in VA. The overall mission influences who the stakeholder is: examples are user, practitioner, and observer. From the consideration of who follows the consideration of why an explanation or transparency is needed, for instance trust and user acceptance, or robustness. Who also determines what, for example a switch in LoA or the state of the system. The why and the what then influence how it can be achieved, highlighting the modality that supports it, for example haptic feedback or natural language, to name a few.


Table A4. Brief description of articles that discuss socio-ethical and legal obligations as prominent motivations for transparency and/or explainability.

General; Offer eight principles, packed into the acronym EDUCATES, intended to guide the development of a machine education curriculum that satisfies socio-ethical requirements.
Methnani et al. [60] | General; Motivate that effective VA systems require transparency that ensures accountability and adequately exercised responsibility for meaningful human control.

Table A6. Article split by those that describe their approach to transparency or explainability as communicating intent, communicating reasoning, or communicating both.

Communicating Intent
Zolotas et al. [109] | Visual feedback using a head-mounted display for AR, showing the projected future state of the robotic wheelchair along its path of travel.
Zolotas and Demiris [107] | Visual feedback (augmented/virtual reality) showing projected predictions.
Zolotas and Demiris [108] | Visual feedback (augmented/virtual reality) showing projected predictions.
Mbanisi et al. [58] | Use of a haptic interface to offer physical feedback cues to human operators; the intention is to align human control input with the computed input for remote mobile teleoperation within populated environments.
Schleer et al. [88] | Investigate the usability of haptic feedback channels along with visual displays for tissue differentiation and bone-milling tasks using VA systems in a laboratory setting.
Sanchez et al. [84] | Visual aid through position-position mapping on a visual display, along with haptic feedback, to support the laparoscopic surgeon and alleviate fatigue.
Tröbinger et al. [98] | Digital twin of doctor-patient remote healthcare, where haptic and visual feedback is incorporated to make the experience as physically transparent as possible.
Pocius et al. [74] | Propose haptic feedback as a humanly intuitive method for an assistive robot to communicate goals to the human user throughout task execution.
Fitzsimons et al. [36] | Promote task training in humans through the use of haptic feedback to communicate full rejection (and system override) or acceptance of human inputs.

Communicating Reasoning
Fearn et al. [33] | Natural-language presentation that is understandable by the user and offers clues as to the reasoning behind particular expectations or predictions made by the system: "I see a kettle and a microwave. Are we in the kitchen?"

Communicating Both
Olatunji et al. [67] | Vary both levels of transparency and levels of autonomy, finding that their configuration is ideally inversely proportional in terms of user preference.
Lawless et al. [51] | Argue that XAI is an affordance for achieving mutual context in human-robot teams, where context is tied to environmental factors that influence perceptions, beliefs, and ultimate decisions.

Table A7. Excluded paper titles and reasons for exclusion.

Title | Reason for Exclusion
Slip suppression in prosthetic hands using a reflective optical sensor and MPI controller | transparent material and not trustworthy AI
(title not recovered) | interdependence not in relation to VA
A unifying framework for transparency optimized controller design in multilateral teleoperation with time delays | multi-user shared control, no autonomy assigned to the robots
Bayesian Rule Modeling for Interpretable Mortality Classification of COVID-19 Patients | interdependence not in relation to VA
Big Data: From modern fears to enlightened and vigilant embrace of new beginnings | no discussion around VA
Causality and Its Applications | interdependence not in relation to VA
Exploring deterministic frequency deviations with XAI | interdependence not in relation to VA
Industrial Part of 25th International Symposium on Methodologies for Intelligent Systems, ISMIS 2020 | conference proceedings
Is the future of AI sustainable? A case study of the European Union | interdependence not in relation to VA
ML post-hoc interpretability: a systematic mapping study | interdependence not in relation to VA
Moveit! task constructor for task-level motion planning | interdependence not in relation to VA
Novel business models in support of trenchless cities | interdependence not in relation to VA
Optimal dynamic auctions are virtual welfare maximizers | no discussion around VA
Resolving the dichotomy of human and machine intelligence in auditing practices | missing PDF
Safeguards identification in computer aided HAZOP study by means of multilevel flow modelling | interdependence not in relation to VA
Simulation-Based Digital Twin Development for Blockchain Enabled End-to-End Industrial Hemp Supply Chain Risk Management | interdependence not in relation to VA
Society 5.0 and a Human Centred Health Care | interdependence not in relation to VA
Temporal models for history-aware explainability | adaptive not in relation to VA
Toeplitz inverse covariance-based clustering of multivariate time series data | interdependence not in relation to VA
Towards Achieving Trust Through Transparency and Ethics | interdependence not in relation to VA
Towards an Explainable Model for Sepsis Detection Based on Sensitivity Analysis | interdependence not in relation to VA
Towards explainable co-robots: Developing confidence-based shared control paradigms | missing PDF
Towards Trustworthy Edge Intelligence: Insights from Voice-Activated Services | interdependence not in relation to VA
TransICD: Transformer Based Code-Wise Attention Model for Explainable ICD Coding | interdependence not in relation to VA
Unraveling Gap Selection Process During Discretionary Lane Changing by Vehicle Class | interdependence not in relation to VA
Multi-Dimension Attention for Multi-Turn Dialog Generation (Student Abstract) | interdependence not in relation to VA
CP-nets-based user preference learning in automated negotiation through completion and correction | interdependence not in relation to VA
Artificial Intelligence in Manufacturing Systems | interdependence not in relation to VA
Strategies, incentives, and determinants of corporate social responsibility | interdependence not in relation to VA
Adaptive human-robot teaming through integrated symbolic and subsymbolic artificial intelligence: preliminary results | missing PDF
Using Centrality Measures to Extract Knowledge from Cryptocurrencies' Interdependencies Networks | interdependence not in relation to VA
On the Relationship of the Acoustic Properties and the Microscale Geometry of Generic Porous Absorbers † | interdependence not in relation to VA
On the Impact of Explanations on Understanding of Algorithmic Decision-Making | interdependence not in relation to VA
Towards containerized, reuse-oriented AI deployment platforms for cognitive IoT applications | no discussion around VA
Computational Urban Science | no human-robot teaming