Navigating Real-World Complexity: A Multi-Medium System for Heterogeneous Robot Teams and Multi-Stakeholder Human-Robot Interaction

Real-world robot systems are often deployed in complex and unstructured environments. These environments, coupled with multi-faceted global tasks, often lead to complicated stakeholder structures, making designing for them extremely challenging. Magnifying this difficulty, tasks performed in these environments often cannot be accomplished by a single robot, or even a single robot type, because of the broad range of needs and physical constraints of the robots. In these cases, heterogeneous robot teams may need to be coupled with human team members to perform the global tasks. From a Human-Robot Interaction (HRI) perspective, this significantly increases the complexity of designing and deploying the system, as complicated stakeholder structures are now mixed with complex robot teams. This paper presents a novel real-world system and interface design leveraging multiple mediums to balance stakeholder needs. To this end, the UI presented here incorporates features that support shared mental models (SMMs) and trust establishment and development, and utilizes a centralized data distribution architecture to improve team performance. In addition to the interface, this paper presents a detailed look at the design process and the lessons learned from the perspective of a multi-year, real-world deployed system, built as part of a large European project consisting of 21 partners from varying countries and backgrounds.


INTRODUCTION
Designing and deploying robotic systems in real-world environments poses distinct challenges for Human-Robot Interaction (HRI), particularly in accommodating the diverse range of individuals who have a stake in the system. Unlike lab settings, where the interaction is often focused on a single primary user, real-world deployments necessitate consideration of the broader impact of the robot's presence on various stakeholders.
This complexity arises because tasks and goals in real-world settings cannot be viewed in a vacuum; they often have ripple effects that extend beyond the immediate task and its direct participants. For instance, a robot assigned to explore an area does not only affect the operator working with it, but also influences those managing the area, individuals relying on the exploration findings, and even those in proximity who may not have a direct interest in the task. These cascading effects underscore the interconnected nature of real-world deployment and the extensive reach of its impact.
Further complicating the deployment is the fact that real-world tasks tend to be more complex and are often performed in unstructured environments. Complex tasks in unstructured environments often necessitate the involvement of a larger stakeholder pool, since operating in these environments requires more coordination, scheduling, and general support. Moreover, as the task is complex, completion may require a heterogeneous team of robots to meet the varying requirements [17]. For example, in performing the complex task of autonomous ship inspection, the BugWright2 project, which serves as the backdrop for the system presented below, requires the involvement of at least four types of robots to inspect a ship's hull. In addition, there are several types of primary end users, and a myriad of other individuals affected.
In this paper, we introduce a novel real-world system and interface design that utilizes a variety of mediums to address the diverse needs of various stakeholders. This design emphasizes the importance of fostering trust between humans and robots and supports the development and maintenance of a Shared Mental Model (SMM) among human team members by leveraging centralized data distribution. While much research has been done on designing interfaces for teleoperated robots, the authors were unable to find any comparable system in the current body of literature combining multiple mediums in a single system as a means for meeting diverse stakeholder needs. As such, below, we also provide a real-world example of how a system of high complexity, consisting of both a heterogeneous robot team and a complicated stakeholder structure, can be deployed. Finally, we document much of the design process and key lessons learned over this multi-year project, consisting of several design cycles working within a large European project context with 21 partners from different countries and diverse backgrounds.

RELATED WORKS

2.1 Teleoperation
In HRI, the design of interfaces for teleoperated robots presents distinct challenges. Teleoperation, specifically, introduces complexities such as time delays, limited perception (limited field of view and depth awareness), and increased cognitive load on operators [4]. When working with a heterogeneous robot team and varying stakeholders (each with unique needs and responsibilities), these problems can be magnified, as humans need to work with multiple types of robots, each with unique attributes, roles, and capabilities.
In working with these complex systems, providing a supervisory-control-based interface can aid the stakeholders [30]. Such systems shift operator focus from direct control of each robot to overseeing the overall mission objectives and managing subtasks [29]. This supervisory structure may not only help operators, but non-operator stakeholders as well, by allowing them to understand the global system state without getting lost in too many control details. Indeed, by leveraging a supervisory structure and coupling it with multiple types of mediums for conveying information, we show below that it is possible to create a more intuitive and accessible interface for all stakeholders involved.

Stakeholder Identification and Inclusion
In designing and deploying a system for the real world, whether it is software, hardware, or a combination of both (e.g., robots), it is important to identify as many stakeholders as possible [3]. This includes not merely the end user, but also indirect stakeholders [10, 22]. One approach to identification is starting with the system and working outward in layers to determine those impacted [2].
Taking an ethnographic approach (e.g., utilizing participant observation, creating personas, and performing expert and non-expert formal and informal interviews) can aid in this discovery [18]. With this data, process and knowledge modeling coupled with psychological work analysis can be used to better understand the tasks and goals of those identified [19]. Additionally, cognitive analysis can be used to determine the various tasks' cognitive demands for different stakeholders. Finally, this can be visualized in flowchart diagrams to integrate the models into the development cycles [19].
From a user-centered design perspective, the development process is iterative and must continually involve the user [23]. As various prototypes are developed, continual stakeholder feedback and testing informs and refines the design. Stakeholder participation during these cycles, particularly in the early stages, is essential to enhance user acceptance and ensure the system's success when deployed [10, 24].

Shared Mental Models
A SMM is a psychological construct representing the common knowledge, beliefs, expectations, and understandings held by members of a team that can facilitate or optimize team functioning [6]. This shared cognitive framework aids team-based situational awareness and facilitates communication, coordination, and collaboration among individuals by aligning their perceptions and expectations [8, 9]. SMMs can also enable team members to better anticipate each other's actions, adjust to changes, and work towards common goals effectively, again resulting in improved team performance, particularly in complex and dynamic environments [9].
In leveraging SMMs, it is also essential to demarcate optimal boundaries and define the elements important to the SMM [6]. To start, it is important to distinguish between broad categories such as team member attributes, team member roles and tasks, as well as the relationships between these [11, 21]. Within each category, further clarity can be added regarding which information is important. For example, in terms of shared knowledge, understanding a robot team member's capabilities might be more important than knowing what specific sensors it has. Once the key information is determined, some of it can then be incorporated into the system design to assist in supporting shared mental models for the human team members.

Trust and Shared Mental Models
When examined at an individual level, trust within the realm of HRI emerges as a multifaceted system, encompassing an array of components and interconnections [28]. Central to the architecture of trust are three pivotal elements: the trustor, the trustee, and the encompassing context. Additionally, several factors influence this system from inception and as time unfolds [16]. These elements can include factors relating to the robots' design, the trustor's experiences and perceptions of the robots, as well as robot feedback [7, 15, 16, 25, 27]. In scenarios characterized by multiple stakeholders and heterogeneous robot teams, these trust dynamics proliferate, manifesting between each stakeholder and every individual robot in unique ways.
In the scenario above, it is conceivable to gain benefits from the added complexity of the environment. In a one-on-one interaction, the human participant forms an initial mental representation of the robot, derived from previous experiences and initial perceptions [1]. This state evolves through continual interaction with, and observation of, the robot (cf. [16]). By contrast, in a scenario involving multiple stakeholders and a variety of robots, mental models are susceptible to refinement not only through direct interactions with the robot, but also through observations of, and interactions with, other stakeholders and robots. For instance, one-on-one, an observer witnessing a robot exhibit erratic movements might generally interpret this as an error. Here, however, the observer may interpret the behavior with reference to a team member; perceiving the operator as calm could lead them to deem the robot's behavior normal. The added context also opens avenues for strategically designing the system to foster trust and support SMMs by linking data presentation to components of the SMMs.

DEVELOPING A MULTI-MEDIUM SYSTEM FOR MULTIPLE STAKEHOLDERS

3.1 Project/Task Overview
The BugWright2 project is a European project intended to develop a robotic system for the inspection of ship hulls. The global task is to allow teams to leverage robots to perform a safety inspection without exposing human workers to risk of harm. As detailed below, a heterogeneous robot team as well as various stakeholders are required to accomplish this task. The first robot required is a drone, which provides an initial point cloud scan of the ship; the scan is converted to a mesh and used by the UI and several robots (e.g., providing a global reference frame and global ship structure). Similarly, an Autonomous Underwater Vehicle (AUV) is needed to map the underwater portion of the ship unreachable by the drone. Finally, magnetic-wheeled robots (the crawlers) are used to measure metal thickness and damage. The operators can then use the UI and the incorporated scan to start marking areas for inspection (either visual inspection by the AUV or drone, or plate thickness inspection by the magnetic-wheeled crawlers). As new information comes in, the operators and technical manager (and potentially a surveyor, if on-site) work together to continue scheduling robot missions until the inspection is complete and a final report can be generated.
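To make the role of the drone-derived mesh concrete, the sketch below shows how such a scan, once converted to a mesh, could serve as the UI's global reference frame in a 3JS (Three.js) scene. This is a minimal illustration under stated assumptions, not the project's actual code; the asset path, marker shape, and helper name are hypothetical.

```typescript
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();

// Load the mesh produced from the drone's point cloud scan and use it as
// the shared global reference frame. The file path is an assumption.
new GLTFLoader().load('/assets/ship-hull.glb', (gltf) => {
  const shipMesh = gltf.scene;
  shipMesh.name = 'shipReferenceFrame';
  scene.add(shipMesh);
});

// Hypothetical helper: mark an inspection area at a point expressed in the
// ship's frame. Because all robots and all UI mediums share this frame, a
// target marked here denotes the same physical location for the drone,
// the AUV, and the crawlers.
function markInspectionTarget(localPoint: THREE.Vector3): THREE.Mesh {
  const marker = new THREE.Mesh(
    new THREE.SphereGeometry(0.15),
    new THREE.MeshBasicMaterial({ color: 0xff3300 })
  );
  const ship = scene.getObjectByName('shipReferenceFrame');
  // Convert from the ship's local frame to world coordinates.
  marker.position.copy(ship ? ship.localToWorld(localPoint.clone()) : localPoint);
  scene.add(marker);
  return marker;
}
```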

Stakeholder Identification Applied
Identifying stakeholders is a multifaceted and ongoing endeavor. Throughout this project, the team engaged in integrated observation (for example, monitoring the safety inspection task with various partners and engaging with inspection teams) and conducted expert and non-expert interviews. These interviews encompassed individuals from diverse industrial and technical backgrounds and specialties. Grounded in the principles of Value-Based Design, the team discerned three primary or direct stakeholders: operators, technical managers, and surveyors. In this context, operators are tasked with taking measurements or providing visual inspection, verifying the inspection locations, and marking the locations. In the absence of robot assistance, this typically involves a team of two individuals: one handling the measurements manually with a handheld sensor, and the other documenting the measurements and locations. Meanwhile, the technical manager oversees determining the locations for thickness measurements and orchestrates the tracking and scheduling of repairs. Ultimately, the surveyor reviews the reports generated by the inspection team and the completed works, making the final determination regarding operating permissions. Given their direct impact on the ship's certification, these groups were categorized as direct stakeholders.
Expanding beyond direct stakeholders, indirect stakeholders encompass a wider array of individuals and groups. Although these individuals might not have a direct impact on the certification outcome, they are likely present during various stages of the inspection and hold varying degrees of interest in the process and result. This broader stakeholder group includes ship-owners, ship-owner representatives, insurance agents, repair experts, and ship/dockworkers, among others. By identifying and acknowledging these diverse stakeholders, the team was better positioned to understand and address the myriad interests and influences that shape the inspection and certification landscape.

Need-finding
The initial need-finding phase was conducted without a predetermined system design. Instead, the focus was on understanding the existing needs, thereby guiding the system's design to address areas where system assistance could be most beneficial.

Direct Stakeholders.
The operators play a critical role in taking measurements and marking locations, necessitating a clear understanding of the inspection targets and the ability to adapt as new information arises, whether through discoveries made on-site or through directions provided by the technical manager. The operators' requirements include receiving an initial set of targets, having a representation of the ship and the target locations, adding inspection areas/missions as needed, receiving tasks from the technical manager, inspecting the designated areas, tracking progress, and monitoring the robots, taking over control if they need to be driven manually.
On the other hand, the technical manager's role does not involve taking or directly tracking measurements, but rather entails overseeing the data from the operators and assigning new tasks or requesting additional data as necessary. Therefore, it is vital for the technical manager to have a comprehensive understanding of the ship's state as it is examined, the location of issues found, and an efficient way of delegating tasks to the operators.
Lastly, the surveyor, who may or may not be on site, requires access to the data gathered by the preceding stakeholders and potentially an interactive platform that enables them to assess whether the ship is operational or if further inspection is necessary. This comprehensive view and interaction with the data are instrumental in making informed decisions about the ship's status and determining the subsequent steps in the inspection process.

Indirect Stakeholders.
While the direct stakeholders significantly influence the ship's operational status, the indirect stakeholders exhibit interest not only in the outcome, but also in observing or monitoring the inspection process. It was discerned that this "observer class" of individuals primarily focuses on the inspection's progress and the evolving state of the ship, particularly concerning the extent and location of any damage or plate thinning, as the inspection unfolds. As such, access to the inspection data and visualizations as they develop is important to this class of stakeholders.

Robots
As mentioned above, some complex tasks require a heterogeneous set of robots to accomplish a global goal. In this particular case, a team of drones, magnetic crawlers, and underwater AUVs was determined to be required to inspect a ship while it is docked, but in the water. While most of these robots can perform missions autonomously, the global tasks and identification of inspection areas must be done by the operators and the manager, as this takes a particular expertise that cannot be delegated to the robots.

Above and Under Water Crawler.

The Altiscan magnetic crawler is a differential-drive robot equipped with magnetic wheels. The robot is capable of driving on vertical surfaces, such as storage tanks and ship hulls. The crawler is equipped with a 6-DOF IMU, an MDECK 1001 UWB receiver, an Axiomtek CAPA310, rotary wheel optical encoders, and a contact V103-RM U8403008 piezoelectric transducer for the depth sensor measurements. Power, water, and data channels are bundled into a single cord, connected to the crawler and the respective inputs. The water supply is essential to ensure adequate surface contact between the piezoelectric transducer and the metal surface. The crawler can be operated in manual mode as well as in autonomous mode. The underwater version is similarly equipped, except that it uses a pressure sensor to determine its depth and cannot use UWB, as it is underwater. Further, from an integration standpoint, it is also important to note that accurate localization of the crawlers is dependent on the 3D model of the ship created by the drone systems.

AUV.
The AUV is an underwater robot, integrating two IMU MPU-9250 units (each featuring a 3-DoF gyro, accelerometer, and magnetometer operating at 200 Hz) for main modeling. It also utilizes a SeaTrac USBL X115/X110 for global referencing with a 1 km range and 10 Hz frequency, a WaterLinked DVL A50 to gauge body velocity, distance to ground, attitude, and integrated position at 8 Hz, and an MS5837-30BA pressure sensor to monitor depth below the water surface up to 30 bar at 10 Hz. Managing the system's computational needs is a Raspberry Pi 3-equivalent System on Chip (SoC).

Interfaces and UI Architecture
3.5.1 Software Interface. In this project, the UI was built using a React front-end design, complemented by virtual-reality/augmented-reality (VR/AR) packages like 3JS to create the visual environment. An advantage of this approach is the adaptability of the same UI across AR, VR, and screen-based platforms, necessitating minimal modifications to transition between different views. The application was hosted on a centralized server, ensuring the UI could run wherever there was internet access. Each robot operated with a socket connection to this server.

3.5.2 Hardware Interface. A variety of VR headsets were explored and utilized for the VR aspect of the project. The software's adherence to the WebVR standard provided compatibility with most VR headsets. For the AR experience, the Microsoft HoloLens2 was employed, providing 2k resolution and hand tracking. Similar to the VR, the screen-based system, given its web-based design, could be evaluated using any browser, with Chrome being the primary choice for most testing scenarios.
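As a rough illustration of how a single web-based UI can target screen, VR, and AR with minimal modification, the sketch below uses the three.js WebXR entry point (the successor to the WebVR standard mentioned above). Only the renderer configuration differs between mediums; the scene setup is illustrative, not the project's actual code.

```typescript
import * as THREE from 'three';
import { VRButton } from 'three/examples/jsm/webxr/VRButton.js';

// One renderer serves the screen-based view by default; enabling WebXR
// lets the same scene be entered in VR (or AR on supporting devices)
// without changing the scene or UI logic.
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.xr.enabled = true; // no effect for screen users, enables headsets
document.body.appendChild(renderer.domElement);
document.body.appendChild(VRButton.createButton(renderer)); // "Enter VR"

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  70, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.set(0, 2, 10);

// setAnimationLoop (rather than requestAnimationFrame) is required for XR
// sessions and works identically for the plain screen-based view.
renderer.setAnimationLoop(() => renderer.render(scene, camera));
```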

Simulator.
Given that accessing the robots for UI development necessitates a challenging coordinated effort, a simulation environment was established utilizing ROS and Gazebo. This environment is designed to accurately model the data interfacing with the UI. It accommodates the loading of various ship meshes or structures and manages the messages sent to and received from the UI by the robots. Developed on Ubuntu 18.04 with ROS Melodic, the simulation operates with identical packages to the robots and outputs equivalent sensor data. Enhancing its portability, the simulation can also be executed via a Dockerfile, establishing a socket connection within the Docker container for UI interaction.
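The sketch below illustrates what the UI side of such a socket connection might look like: the UI cannot tell whether messages originate from Gazebo or from real robots, which is precisely what makes the simulator useful for development. The endpoint, message schema, and handler names are assumptions for illustration only.

```typescript
// Hypothetical message shape shared by the simulator and the real robots.
interface RobotStateMsg {
  robotId: string;                    // e.g. "crawler-1", "auv-0"
  kind: 'drone' | 'crawler' | 'auv';
  pose: { x: number; y: number; z: number; yaw: number };
  missionId?: string;
  fault?: string;                     // populated on sensor errors, etc.
}

const socket = new WebSocket('ws://localhost:9090/robots'); // assumed port

socket.onmessage = (event: MessageEvent) => {
  const msg = JSON.parse(event.data) as RobotStateMsg;
  // Identical handling whether the message comes from the Gazebo
  // simulation or a real robot: the simulator mirrors the robot packages.
  updateRobotVisualization(msg);
  if (msg.fault) notifyOperators(msg);
};

// Hypothetical UI functions, declared here only to keep the sketch typed.
declare function updateRobotVisualization(msg: RobotStateMsg): void;
declare function notifyOperators(msg: RobotStateMsg): void;
```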

UI Design
Given the variety of stakeholder needs, the typical single-medium paradigm was not optimal for this situation. Allowing a choice of medium provided greater flexibility. It also allowed the user to determine which interface would optimize their individual tasks. Moreover, by developing the system in a way that allows both AR and VR mediums, stakeholders can observe the state of the task and robots spatially, which solves some of the problems of teleoperation noted in section 2.1. While this may be particularly helpful for operators, it also allows any stakeholder preferring a more immersive perspective to access the current information [14]. Embedding questionnaires into VR has also been shown to be a valuable method for evaluating subjective experiences during its use [13].
Concerning SMMs, the design of the UI can heavily influence the amount and format of feedback provided by the robot system to each stakeholder. As noted above, a SMM represents the shared understanding of various factors, including the robot state, task state, or capabilities. Therefore, designing the UI such that it distributes information evenly, and in a way that respects the needs of each stakeholder, can create a greater level of coherence in the SMM [12]. For example, by developing a web-based visualization system, stakeholders can observe the state of all the robots and request state information from any device connected to the internet or local network. This not only ensures that all stakeholders receive similar information simultaneously, but also that each stakeholder may interactively learn more about the system. The net result is that stakeholders with varying backgrounds can all get the most current system state in a way best suited to them.

Centralized Distribution Model with Multiple Mediums.
Addressing the unique information and interaction needs of diverse stakeholders, while ensuring the consistency of disseminated information, presents a nuanced challenge in a multi-stakeholder environment. Recognizing the distinctiveness of each stakeholder's requirements and the imperative for uniformity in information, the team devised a centralized system, hosted on a remote server. This system is engineered to cater to individualized user needs, disseminating information across varied mediums. By leveraging a centralized architecture, the system maintains the integrity and consistency of the information.
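A minimal sketch of this centralized fan-out pattern, written here with Node's ws package, might look as follows; the port, state shape, and message types are illustrative assumptions rather than the deployed implementation.

```typescript
import WebSocket, { WebSocketServer } from 'ws';

// One server holds the authoritative state and fans identical updates out
// to every connected medium (AR, VR, screen, wall display).
const wss = new WebSocketServer({ port: 8080 }); // assumed port
let systemState = { robots: {}, missions: {} };  // simplified state shape

wss.on('connection', (client) => {
  // New clients immediately receive the full current state, so every
  // stakeholder starts from the same picture regardless of medium.
  client.send(JSON.stringify({ type: 'snapshot', state: systemState }));
});

// Called whenever a robot or operator changes something.
function publishUpdate(update: object): void {
  systemState = { ...systemState, ...update };
  const msg = JSON.stringify({ type: 'update', state: update });
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) client.send(msg);
  }
}
```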
Within this system, the AR operator plays an important on-site role, but from a safe distance, as the operator can visually monitor the robots. The utilization of AR technology enables the operator to shift to manual control if the robots experience challenges, owing to sensor malfunctions or increased sensor noise, while remaining safely located dockside. For example, the AR operator could promptly detect discrepancies between the expected and actual robot locations by leveraging real-time overlays of the localization for each robot type. Concurrently, to efficiently plan and govern missions for various robots, the operator still has access to a comprehensive set of controls, analogous to those available to a remote operator using VR or a screen.
In contrast to the AR operator, the VR operator can assume their role from a remote location. The VR operator functions in a more projected environment, where a detailed model of the ship is overlaid with images depicting potential damage after the drones perform visual surveillance of the ship. Utilizing VR in this context presents distinct advantages over AR, primarily in the manipulation of the model and immersion in the environment. The immersive nature of VR allows the operator to gain a more accurate and intuitive feel for the size, proportions, and layout of the ships and robots within the spaces they are interacting with. By placing the operators in an even safer environment (i.e., a remote command center), the operator can simultaneously remove a degree of stress or discomfort from completing the task, and focus more on the role of managing the robots and inspecting the ship.
For the technical manager, a screen-based application can facilitate ongoing monitoring of the inspection and assignment of new tasks to the robots, while allowing easier interaction with other stakeholders. Although a VR-based system could support the tasks associated with this specific role, the technical manager frequently interacts with others in the environment, such as indirect stakeholders or scheduling parties for coordinating repairs. Furthermore, given that this role does not have as strong a requirement for closely monitoring or driving the robots, employing a screen-based application can be seen as a better balance of the user's needs.
Finally, for the indirect stakeholders, we further extended the screen-based application so that it was projectable on a larger medium such as a large monitor or TV, allowing greater collaboration with other stakeholders. As the UI can support any medium and can be accessed from anywhere with internet access, this also allows indirect stakeholders to passively monitor activity in private, potentially while performing other tasks.

Factoring in Standard SMM Team Development Techniques.
Varied information needs. As mentioned above, the concept of SMMs is rooted in the idea that improved team performance can occur when various team members share mental representations of various information. Indeed, in the case of two operators with a small variance in duties and roles, a strong overlap of information is extremely beneficial. For example, if a problem with a robot is reported to both of the operators simultaneously, then the one responsible on-site could take control and instantly divert their attention, while the remote operator could anticipate cancelling, pausing, or modifying missions, or hold off on adding new ones, until the robot is repaired. Conversely, once the robot state is back to normal, the remote operator will know to immediately resume mission planning, and the on-site operator can anticipate this, as they have the same knowledge.
At the same time, not all models need to be globally shared at the same level. In some cases, there may be information between two stakeholders that is important for improving task performance, but adding that same information for a third stakeholder would amount to extraneous information that could harm performance [6]. In these cases, the extra information may divert attention, removing the individual from a flow state, or increase cognitive load. Extending the case in question to the shipowner, for example, knowing the state of the robots would not provide any useful information. The shipowner's primary focus is on the inspection progress and the degree of damage that exists; adding information about the robot team and an overlap of that information would not improve their performance as a supporting stakeholder. As such, this stakeholder could choose an alternate view or minimize certain information, yet the information received would remain consistent with all stakeholders.
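One plausible way to realize this is to keep a single authoritative state and derive role-specific projections from it, so a stakeholder's view can omit detail without ever diverging from the shared data. The roles and fields below are illustrative assumptions, not the system's actual schema.

```typescript
type Role = 'operator' | 'technicalManager' | 'observer';

interface SharedState {
  missionProgress: number;                     // 0..1 over the inspection
  damageMarkers: { location: string; severity: number }[];
  missionQueue: { missionId: string; robotId: string }[];
  robotTelemetry: Record<string, { battery: number; fault?: string }>;
}

// Every role reads from the same source; each view projects only what
// that role needs, so information stays consistent without overload.
function projectView(state: SharedState, role: Role): Partial<SharedState> {
  switch (role) {
    case 'operator':                 // full detail, incl. robot telemetry
      return state;
    case 'technicalManager': {       // overview needed to delegate tasks
      const { robotTelemetry, ...overview } = state;
      return overview;
    }
    case 'observer':                 // e.g. shipowner: progress and damage
      return {
        missionProgress: state.missionProgress,
        damageMarkers: state.damageMarkers,
      };
  }
}
```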
Cross-Training. While the system does not track the stakeholder mental models, its design supports classical SMM development techniques such as cross-training. In a team situation, cross-training is where team members learn about the roles and tasks of the other team members [9]. From an HRI perspective, this means different stakeholders have some means of learning about each other, from each other. While the UI does not provide a built-in cross-training platform, it incorporates several features that can assist team members in engaging in cross-training. These features include allowing users to share and discuss all state data in real time and simultaneously. This allows team members to explain different state features to other members as they are occurring. For example, the operators can share the robot states with the other stakeholders through a display and explain what states and behaviors are normal, acting as a proxy for training other team members about the robot and the robot's abilities. Similarly, tasks or missions assigned can be shared globally with comments and notes attached (the system supports both written and voice comments). This allows team members to give details explaining their reasoning and thinking behind the tasks assigned, again fostering the ideas behind cross-training as a direct window into the perspective of other team members.
Briefing and Debriefing. Like cross-training, supporting a system of structured briefing and debriefing can aid in developing a more coherent SMM for team members and increase productivity [31].
A key to providing structured briefing and debriefing is capturing holistic and accurate decision data to support the structured sessions. For the UI to support this without overloading team members with data, the design needed to balance the holistic capture of data with the quantity of data available from both team members and the robots. To this end, the UI tracks items like robot state, mission state, mission histories, and mission failures, as well as notes and/or reasons for failures. The system also provides information hiding, so that the information can be recalled at a later time without creating more current load on users. When briefing or debriefing occurs, these records can be used as concrete reference points, allowing for team optimization and updating of SMMs.
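As a sketch of the kind of record keeping this implies, the structures below capture mission histories and failures with attached notes that stay hidden until recalled for a debriefing session; all field names are assumptions for illustration.

```typescript
interface MissionRecord {
  missionId: string;
  robotId: string;
  status: 'completed' | 'failed' | 'aborted';
  startedAt: string;                 // ISO-8601 timestamps
  endedAt: string;
  failureReason?: string;            // populated for failed missions
  notes: { author: string; kind: 'text' | 'voice'; content: string }[];
}

// Records accumulate silently during operation (information hiding) and
// are only surfaced when a briefing or debriefing session requests them.
const missionHistory: MissionRecord[] = [];

// During debriefing, failures can be pulled up as concrete reference
// points for updating the team's shared mental models.
function debriefingReport(history: MissionRecord[]): MissionRecord[] {
  return history.filter((m) => m.status === 'failed');
}
```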

Results and Field Development in Complex Environments
The system presented was developed and tested in an array of complex environments, and feedback from end users as well as experts was gathered at each iteration step. These environments consisted of real-world settings where the technology could eventually be deployed. The testing always included experts from several fields, as well as various industrial partners who could interact with and provide feedback on the system. These locations included a shipyard in Greece with a ship hull mock-up, a shipyard in Trondheim with a remote command center and ship, a storage tank in France where industrial partners were actually doing safety inspections concurrent with the research, and finally a navy base in Portugal (see fig. 4). While most test cycles were roughly 6 months apart, the design and UI team periodically met between sessions with end users when doing so would speed development. In addition, there were typically weekly design meetings with partners as well.

User Findings.
In each of these environments, the design team leveraged access to industry and expert partners to provide feedback and guidance on the development of the systems. Generally, the primary methods of data gathering here were observation and interviews, as arranging a formal controlled study was not possible due to logistical constraints such as the number of participants allowed in the restricted areas. Instead, industry partners and experts, who included key stakeholders, would often use the provided software and give feedback regarding their experiences as well as recommendations for what was needed.
Based on the qualitative findings from these individuals, there was an array of feedback regarding both the robots and the various interfaces. For the robots, there were often many questions regarding how the various robots actually worked, and several requests for demonstrations. This was particularly true for the crawlers and the AUV, because these are fairly new form factors compared with drones. This implied that, absent information or some interface, there is little affordance for understanding the operation of these robots, even for technical experts.
With respect to the UI, the primary interest from individuals at the work sites was in the AR and VR technologies and how they were used. While the screen-based system is useful, the novelty of the VR/AR systems captured the attention of most participants, as many had not really experienced VR/AR, especially as it related to ship inspection. Yet, in terms of use, users at the various sites were often both impressed by and excited to experience the immersive visualizations as well as the desktop application.
At the most recent site testing, the team was able to have a small group of users formally test some of the later-stage software and fill in some questionnaires, though again this was highly constrained by the military base context. This testing ultimately resulted in some positive preliminary findings, as there were both a partner operator and a non-partner operator, a partner observer, and a non-partner observer to test the software. While there is no definitive questionnaire for trust in HRI yet, there are a few highly recognized sources [5]. However, since most of these sources focus on a singular robot or system, and due to participant time constraints, the team here used these as a jumping-off point for generating a basic set of questions more tailored to the experience of this UI and robot team. To this end, the questions primarily pull from the works of [20, 26, 32]. Here, 11 questions were administered using a 0-7 Likert scale, plus an additional "N/A" category in case the question did not seem to warrant an answer based on the system presented (see results in table 1 and questions in table 2).
While, again, the sample size was not large enough to provide an in-depth analysis, what was promising is that both the partner operator and the non-partner operator indicated that the system gave them enough information to perform their given tasks, and that they understood the capabilities of each robot.
Conversely, both operators provided a lower score when asked if the system provided too much information. This indicates that, from an operator standpoint, there was a fairly good balance in the information given, such that they felt comfortable with the robots and did not feel overloaded with information. That being said, it should be noted that one of the operators did give a higher score than the other regarding too much information, which may also indicate that more customization needs to be offered for users to select their levels of detail.

Lessons Learned
3.8.1 Testing at Different Locations is Essential. In this project, the wide range of stakeholders coupled with a complex task showed that testing at multiple locations was essential for capturing the variance in user needs. As noted above, in this project there were multiple industrial partners. Yet, even though these partners are somewhat symmetrically situated (i.e., they all do safety inspections), distinctions between locations and operations make the end users unique between partners, even if the overarching role is the same. Some dock spaces may have tighter regulations regarding drone activity and surveillance, some ships may vary in complexity, and some shipyards may not work for different sensors. For example, in one location the AR operator could see both sides of the ship because it was docked in a U-shaped dock; by contrast, in another location the AR operator could only see one half. This meant the visualization system needed to allow an operator who could not see the robot to take manual control and see the robot through a different medium if the localization was not working. Thus, like testing software on different platforms, testing the robot system and UI benefited heavily from testing at various sites, because each provided unique feedback for the system development.
3.8.2 Defining the Shared Mental Models. Similar to the issues above, understanding the new robot-augmented roles, having partners with different cultures, and changes in context meant that SMMs and trust levels also varied from partner to partner. For example, one partner on the project both performs inspections as an operator and designed and built one of the robots. By contrast, for other partners the robots were new. In these cases, determining what information should exist within the SMM changes, and subsequently so too does the information shared. This could also be seen in ways of reporting information to the user based on cultural differences. While some users may interpret certain signals or colors one way, others might view them differently. Thus, the team found that it was necessary to continually update its thinking on what data to display and how to display it to factor in this variance.

Software
Web-based development for VR is better for rapid prototyping and development cycles. At the outset of the project, the initial implementation used Unreal Engine as the basis for the VR and AR development. Yet, even though we had experts in Unreal Engine development on the team, it was not as easy to rapidly change features and prototype new forms of interaction with the robots. Further, it limited the application distribution to interfaces that could run the specific software and could require a new compilation for different types of devices. By shifting towards a web-based approach, the UI development team was able to create greater compatibility with a wider range of devices and, more importantly, modify or update the UI on the fly to test new features with the end user still present. Using a component-based front-end framework like React integrated with 3JS also allowed for greater flexibility in feature testing with users in real time. React is a front-end web development framework that takes features of a UI and represents them as independent components that can be modified or reused. The addition of a new feature often only requires a quick addition of a new component, or a quick modification or addition of a smaller component to a larger one. In practice, this meant that when a user or expert wanted a new feature added to the UI, if the feature was small enough, it could be added in a matter of minutes and the change would span all components using that feature. This also meant that users using the UI on different mediums would immediately benefit from these features.
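As an illustration of the component pattern described above, a small status badge like the following could be written once and reused across the dashboard, VR, and AR views; the component, props, and styling are hypothetical.

```tsx
import React from 'react';

// Hypothetical status badge: written once as a React component, it can be
// dropped into the dashboard, the VR overlay, or the AR view alike, so a
// small feature addition propagates to every medium using it.
interface RobotStatusProps {
  name: string;
  battery: number; // 0..100
  fault?: string;
}

export function RobotStatusBadge({ name, battery, fault }: RobotStatusProps) {
  const color = fault ? 'red' : battery < 20 ? 'orange' : 'green';
  return (
    <span style={{ color }}>
      {name}: {fault ?? `${battery}% battery`}
    </span>
  );
}

// Usage anywhere in the UI tree, regardless of medium:
//   <RobotStatusBadge name="crawler-1" battery={62} />
```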

Hardware
The primary limitation in terms of hardware came from the HoloLens2. The HoloLens2 does not have the ability to darken the glass it is projecting upon, so in sunlight conditions, users cannot see any of the data projected. In addition, for this particular application the viewing angle of the HoloLens was too small, which meant that an object projected on a large surface (e.g., a ship) often only showed in the narrow viewport. In practice, this meant that seeing the robot localization in real time required a good deal of head movement. There are two potential hardware solutions for this issue. The first would be to utilize a VR headset that can pass the world through to the user. At the time of writing, the authors have not been able to find a headset truly capable of doing this without distortion from the camera lenses while also running without a high-powered GPU near the device. In this project, the team tried the MetaQuest Pro and the ViveXR, but neither had a sufficient level of accurate pass-through to match the users' needs. Nonetheless, these devices are continuing to improve, with companies such as Apple advertising crystal-clear pass-through options. The second option would be to attempt using a different AR display, such as the MagicLeap2 with shading in bright environments, but this is a new device that still requires testing with the WebXR standard. Finally, one could also consider using an AR kit and a tablet or handheld device, but given the conditions outside near a ship, it would be much safer to keep the operator's hands free while performing the inspection.

CONCLUSION
Constructing a system tailored for a multi-stakeholder environment and a heterogeneous robot team, all aiming to accomplish a complex task, is a difficult and intricate endeavor. The process of stakeholder identification alone presents significant challenges, necessitating careful consideration of all potential entities who may hold an interest in the system or the data it produces. Once stakeholders are identified, discerning the exact needs and requirements of each becomes another layer of complexity.
Adding to this, the introduction of new roles, brought about by the augmentation with robots, showed that stakeholders and designers had to simultaneously navigate uncharted territories during development. This resulted in roles that diverge from their prior experiences, underscoring the continuously evolving nature of the design process and the need to integrate users and multiple locations in the design process. The novelty and progression of these roles required a dynamic and adaptable approach to interface development, ensuring that the design remained effective amidst changing conditions.
Nonetheless, by adopting a holistic and iterative approach to the problem and refraining from confining the design to a single interface, we have shown it is possible to develop an effective interface that can be deployed in real-world situations for handling a complex task and a heterogeneous robot team.

Figure 3: Dashboard View of UI

Figure 4: On-site images from the various testing sites

Table 1: User Trust Questionnaire Results

Table 2: User Trust Questionnaire Items
1. The system provides adequate feedback to accomplish your task.
2. The system is reliable in performing the tasks.
3. Each robot is reliable at performing its tasks.
4. The system meets the needs to perform the mission/task.
5. The system can perform the task better than a team of novice humans.
6. The system communicates effectively.
7. The robots make sensible decisions.
8. I understand the roles of each robot.
9. I understand the capabilities of each robot.
10. The system provides too much information.
11. I feel comfortable using the system.