Enabling Untrained Users to Shape Real-World Robot Behavior Using an Intuitive Visual Programming Tool in Human-Robot Interaction Scenarios

For untrained users, programming a robot that interacts with humans in a real-world scenario is challenging to impossible. However, in order to make interactive robots available in a wide range of domains and connect them with other smart devices, it must be possible to change their behavior in a simple and intuitive way. We present a visual programming tool that builds on top of the open-source Node-RED software and enables users to quickly and easily connect robots with Internet of Things (IoT) devices in order to build scenarios that include human interaction. The tool, called Node-(RED)² (Node-RED-based Robotics Empowerment Designer) is available online and currently supports the humanoid robot Pepper, but is extendable to other robots with very little effort. We demonstrate two real-world use cases of our tool that include Pepper and IoT devices and evaluate the utility of Node-(RED)² via a user study.


INTRODUCTION
In order to succeed in the real world, service robots need to interact with humans in an immersive way while remaining flexibly adaptable to changing tasks and environments. What users need are robots that are not only able to deal with a large variety of interaction tasks in realistic everyday environments, but also expose a simple way to change their behavior when necessary. One large success factor is that untrained, tech-savvy users can adapt existing behaviors and create new ones themselves. These users may be experts in the respective real-world scenario, e.g. store or hotel owners, and are possibly interested in trying out new technical innovations, but may have neither a technical education and experience nor the desire to deal with low-level technical details.
For this kind of user, any tool to create and adapt robot behavior needs to be intuitively usable, hide implementation details where possible, and be able to interface with a large number of robots and smart devices. Empowering as large a user base as possible to adapt robots to their use cases promotes human-robot interaction as a whole and makes robots more usable in everyday scenarios. We present a tool that aims to provide such a platform for untrained users, using visual programming of high-level behavior that allows for simple configuration and scenario building for many types of robots and other Internet of Things (IoT) hardware. It uses Node-RED¹, a widely used open-source IoT visual programming tool that can integrate a multitude of other devices via standardized web interfaces. This makes the tool usable in real-world scenarios where untrained users with basic technical capabilities, but no training in robot programming, shall be empowered to change and adapt their robot's behavior using exactly the high-level functionality they need. For users requiring IoT devices like sensors and smart home accessories, the tool provides the necessary connectivity as well. It lowers the entry barriers to robots and smart devices for small businesses, so that these users can be relieved from tedious tasks by robots which are, in contrast to most existing deployments, re-programmable and adaptable to changing requirements.
In the following, we first present an overview of visual programming in robotics, followed by a description of our tool Node-(RED)². Afterwards, we describe the use case of a realistic everyday scenario and how we implemented it using our tool and a real robot. Finally, we present a user study that evaluates the usability and learnability of our tool.

VISUAL PROGRAMMING IN THE ROBOTICS DOMAIN
Visual programming is a paradigm that, unlike traditional code-based programming, allows users to create and manipulate code through a graphical interface, using blocks, nodes, or other visual elements. This approach dates back to at least 1987 [8] and abstracts complex coding concepts into graphical representations. It is therefore accessible to a wide range of users, including people with no prior (code-based) programming skills.
As for tooling used in visual programming, a very popular open-source library is Blockly [15], which is widely used in various areas of robotics [19,20,22] and beyond [13]. This library is also used, for instance, by the Rocksi web-based robot simulator² for programming simple processes on commonly used manipulators. However, this simulator to date only supports manipulator arms and no other types of robots.
In addition to open-source tools, several robotics manufacturers supply proprietary visual programming tools with their robots. A prominent example is Aldebaran Robotics with the Choregraphe Suite [1], which is sold with Pepper, the robot we use for demonstrating our approach (see Section 4). This tool supports a large number of operations, programmable in a user-friendly way, but is limited to Aldebaran robots with no options for robot-robot interaction or control of any smart devices. It can serve as a baseline for the functionality desired for one particular robot, but is unusable in the context of the sophisticated multi-device scenarios targeted by our approach.
Schuetze et al. [17] created a plugin named XML Visual Programming (XVP) as a graphical user interface for an existing tool; however, it was not designed as a self-contained ecosystem like our software. The tool is presented for use in very specific scenarios, not as a "standard" tool like Node-RED with a large community. It also does not provide connectivity with other devices and is currently not publicly available.
The ComFlow tool presented in [14] aims in a similar direction of research, but is oriented towards industrial users with no need for connectivity to other devices. Untrained users in industrial environments can spend a certain amount of time on system setup, whereas small-business owners often do not wish to invest time into functionality they do not need. Hence, they would not want to deal with additional ComFlow features like digital twins and role management, which are desirable in industrial environments but may get in the way for the kind of users targeted here. In addition, ComFlow has to date not been made publicly available and hence was not usable for the presented research.
Kremenski and Lekova [10] use Node-RED with Pepper as an IoT device communicating via the MQTT protocol in a Brain-Computer Interface setup. They show that programming the Pepper robot using Node-RED is possible and useful. These authors use the MQTT messaging protocol with the built-in nodes, e.g. nodes that expose Python code to the user. This generates behavior graphs that work on a rather low level, where the user gets to see (and potentially may think they are supposed to understand) parts of the source code. In our approach, however, we aim to display at most easily understandable configuration parameters to the user. We built all robot capabilities top-down to avoid exposing users to any source code or complex parameterization.
Lekova et al. [11] present another interesting use case of Node-RED in which Nao, a humanoid robot similar to Pepper, is combined with a Large Language Model and Speech-To-Text/Text-To-Speech capabilities to act as a therapy robot. This is a very interesting scenario; however, the implementation in Node-RED is performed on a low level, as in their other work [10], and hence does not target the same user base as our proposed approach.
In the work of Leonardi et al. [12], a visual programming environment is presented that integrates IoT devices with a Pepper robot. Their work enables the user to define rules that are triggered asynchronously by the system whenever the implemented condition is fulfilled. Behavior-wise, this is comparable to Node-RED in its original flavor, that is, without the possibility of following a sequential program flow with a defined start and end point. Our approach provides this option in order to allow for a more intuitive understanding, where the beginning and end of a behavior are clearly defined. Leonardi et al.'s work, in contrast, does not require following a sequential program flow because of the asynchronous trigger mechanism and hence does not provide an intuitively understandable, traversable graph visualization like Node-RED.
Several other generic visual programming platforms exist in the robotics domain that, however, are not suitable for our requirements. One example is the closed-source ZBOS platform³ (https://zorabots.be/zbos-platform) developed by Zora Robotics. This platform provides a simple and consistent way of programming various robots and certain smart home devices. The most important components of the platform are ZBOS Rail and ZBOS Control. The different robots can be programmed via the so-called Composer, an integrated visual programming environment, in which most robot functions are offered in the form of function blocks. Theoretically, it is even possible to integrate external systems or additional robot functions into the visual programming environment. Although custom functions can be integrated through the MQTT broker, a user-friendly and reusable integration inside the Composer is not possible due to the closed-source nature of the platform.

NODE-RED-BASED ROBOTICS EMPOWERMENT DESIGNER - NODE-(RED)²
As explained previously, our approach needs to be based on a visual programming tool that allows for communication between arbitrary IoT devices through standardized web interfaces. Additionally, visual programming should be possible on a wide range of abstraction levels, especially at a level high enough to boost usability for untrained users. Since we aim to build an open-source solution, Node-RED has proven to be the only option satisfying all these constraints. It also brings along a significant user base and a large number of devices and applications that have been catered for so far. However, up to now, Node-RED has been used frequently for non-robotics applications, like in the context of transportation and logistics scenarios [18], but rarely in combination with robot programming. In addition, in most applications showcased in other research (e.g. [10,11]), Node-RED enables only low-level behavior design, which may prove difficult for untrained users: they should not need to know about technical details, and exposing such details also decreases the tool's usability. Our proposed solution therefore includes visual programming of top-down, high-level behavior as described in Section 4.
The tool presented in this work empowers users who have an interest in technical topics, but are untrained in robot programming, to design behaviors for robots. Because it is based on the powerful Node-RED visual programming tool, we name the software Node-(RED)² (Node-RED-based Robotics Empowerment Designer, pronounced "Node-RED squared"). In the following, we explain the rationale behind the design and implementation of this tool as well as the features it provides on top of the default Node-RED implementation.

Usability
The potential use cases of connected robots and IoT devices have to possess a high degree of usability for the end users in order to reach sufficient acceptance. However, the proposed approach targets untrained robot owners and their staff, who are not necessarily the same people that interact with the robot in real-life scenarios. For these people, usability of the visual programming platform plays an important role: if they are not able to re-configure or re-parameterize the robot once requirements change, they are likely not going to buy it in the first place. Therefore, besides the general technical functionality of the tool, usability is the central construct that validates the proposed approach. We evaluate our solution using a user study featuring different usability metrics on a specific use case, as described in the next section.
Node-RED, by default, uses an event-based approach to trigger behaviors (called flows in Node-RED) based on incoming IoT events, for instance, a smart home sensor's measurements exceeding a defined threshold. However, for our targeted application, non-programmers in particular should be able to maintain a clear overview of the application at any time. This is cognitively harder for non-sequential, asynchronous flows than for sequential ones. Therefore, in addition to the features that enhance Node-RED as described in Section 3.2, we facilitate usage by defaulting to sequential control flows using fixed start and stop nodes (see Fig. 2) plus status messages underneath the respective nodes, a decision that has been confirmed as useful in the user evaluation. Nevertheless, asynchronous event triggers are highly useful for more advanced and connected usages. We enable the parallelization of nodes inside the flow, which requires synchronization at the merge point. For this, a Join node has been designed so that synchronous flow parts, such as playing an animation and reciting text at the same time, wait for each other to complete. In order to avoid infinite loops in the control flow, a simple loop detection has been implemented as well, so that a warning message is triggered in case of a looped flow with no termination condition.
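The synchronization performed by the Join node can be illustrated with a minimal sketch; this is a hypothetical, heavily simplified version, not the actual Node-(RED)² implementation:

```javascript
// Minimal sketch of Join-node synchronization (hypothetical, simplified):
// messages from parallel branches are held back until all expected branches
// have arrived, then a single merged message is forwarded.
function makeJoin(expectedBranches, forward) {
  const arrived = [];
  return function onInput(msg) {
    arrived.push(msg);
    if (arrived.length === expectedBranches) {
      // All parallel parts (e.g. animation and speech) are done: merge and continue.
      forward({ payload: arrived.map((m) => m.payload) });
      arrived.length = 0; // reset for the next flow run
    }
  };
}

// Example: an animation branch and a speech branch feeding one Join node.
let merged = null;
const join = makeJoin(2, (msg) => { merged = msg; });
join({ payload: "animation done" });
join({ payload: "speech done" });
// merged.payload is now ["animation done", "speech done"]
```

The key design point is that the flow after the Join node only continues once every parallel branch has reported completion.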
Because we aim for a top-down visual programming approach on a need-to-know basis for the user, most nodes we implemented for the Pepper robot to demonstrate the use case presented in Section 4 work on a high abstraction level. Fig. 3 shows the currently implemented nodes, which do not cover all supported Pepper capabilities yet, and do not expose any low-level parameters or technical data like IP addresses, message payloads and other information commonly required inside the configuration dialogs of built-in Node-RED nodes. It is important to note that this implementation only supports Pepper running NAOqi version 2.5; there is no support for NAOqi version 2.9 yet.
In order to improve the user experience, the Node-RED documentation mechanism⁴ has been used extensively for the provided nodes; it displays detailed information about nodes and parameters. In the user study (see Section 4.2), users found this particularly helpful. Because it is containerized using Docker, Node-(RED)² can effortlessly be deployed on any system that has network access to the robot to be interfaced. This could either be some central server, an internal robot PC or, for instance, a single-board computer attached to the robot as an enhancement of its capabilities. Since Node-(RED)² uses a web-based Graphical User Interface, behaviors can be built and controlled from the web browser of any device in the same network without software installation on the user side.
Since all communication between Node-(RED)² and robots or other IoT devices happens via a RESTful API, the MQTT messaging protocol or Socket.IO, out-of-the-box connectivity is provided for any device that implements one of these. The only implementation required to connect a new device so that it can be interfaced by untrained users is a JavaScript node description (i.e. a call/callback to/from the RESTful API or an MQTT message subscriber, respectively, plus some bookkeeping code and documentation). This follows the Node-RED standards and is hence well documented. On the user side, as soon as this node implementation exists, the user can integrate the new capability in their behavior graph, triggering the respective actions on the robot via the web interface once the node is called.
Currently, as we interface a Pepper robot as described in the use case we use for evaluation, we ship a number of nodes with the Node-(RED)² deployment (see Section 4) that allow programming Pepper behavior on a high level, without the necessity to deal with network protocols or low-level motion and navigation.
Fig. 4 shows the system architecture of the Node-(RED)² development platform, including the components that interact with the Pepper robot. All components on the host system have been containerized using Docker to allow for quick deployment and portability across systems. The Node-RED container exclusively contains a Node-RED instance and is based on the official Node-RED Docker image. In order to connect a different robot, the entire host container can remain unchanged; only the REST client, i.e. the robot-specific interface to its high-level functionality, has to be implemented and exposed accordingly. Likewise, any IoT device can be interfaced by substituting the robot container in the architecture diagram.
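A deployment along these lines could be described in a compose file similar to the following sketch; the service names, image tag, port and environment variable are illustrative assumptions, not the actual Node-(RED)² configuration:

```yaml
# Illustrative sketch of a containerized deployment (names, ports and
# variables are assumptions): a Node-RED container plus a robot-specific
# REST client, as in the architecture of Fig. 4.
services:
  node-red:
    image: nodered/node-red:latest   # based on the official Node-RED image
    ports:
      - "1880:1880"                  # web-based editor, reachable from any browser
  pepper-rest-client:
    build: ./pepper-rest-client      # robot-specific high-level interface
    environment:
      ROBOT_IP: "192.168.1.42"       # network address of the Pepper robot
```

Swapping in a different robot then amounts to replacing the robot-specific service while the Node-RED service stays untouched.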

IoT Device Connectivity
Smart home and other "smart" scenarios thrive on connectivity between devices with a high degree of automation. Therefore, we enable and encourage the integration of other, non-robot IoT devices in the Designer, as supported natively by Node-RED. In the use case described in Section 4, we showcase an example which aims at making the lives of the end users easier. The scenario uses a thermal camera for measuring body temperature, which is connected as a separate IoT device.
In addition to this use case, to show the versatility of Node-(RED)², we have built another scenario with an off-the-shelf smart lightbulb as the IoT device and a Pepper robot. The robot is connected via Node-(RED)² and works in a restaurant setting where it greets incoming guests and takes their order - a typical use case to be built by untrained users in a real-life scenario. The accompanying video⁵ shows this interaction together with the live view of Node-(RED)².

Reusability and Usefulness
Node-(RED)² in the setup described in Section 3.2 is available online at https://github.com/Robotics-Empowerment-Designer/RED-Platform under the Apache 2.0 license. In summary, the main advantages of Node-(RED)² compared to existing tools are a) defined, synchronous behavior flows that are simple to program and configure, b) simple deployment via a containerized architecture and install scripts, c) seamless connection with other IoT devices and robots, and d) its foundation on a well-maintained tool that is established in the robotics and IoT community.

USE CASE: PEPPER IN A HOTEL RECEPTION SCENARIO
In addition to the video linked above (see Section 3.3), we have defined another real-life use case, which is evaluated in detail in a user study described in this section. The following scenario, like the one in the video, serves as a showcase of what is possible with Node-(RED)². Using the capabilities of the robot available as nodes, scenarios can in principle grow arbitrarily complex. For the demonstrated use case, however, we need to limit the complexity of the scenario so that it remains manageable within the course of the user study.

Scenario
In the context of the COVID-19 pandemic, temperature measurement at entry points like hotel receptions has become an additional, important task for staff. Robots are very well suited for such routine tasks, and especially humanoid robots like Pepper work well in such scenarios because they can interact with guests efficiently, verbally as well as non-verbally. Therefore, we chose the example use case to be implemented as follows:
• detect if a guest has arrived
• once a guest has been detected, check their body temperature using a thermal camera mounted on the robot
• if the temperature is above a specified threshold, reject the guest using voice output and another arbitrary action
• if the temperature is below the threshold, welcome the guest using voice output, by pointing in the direction of the reception, and using another arbitrary action
• say goodbye and wave
• after a short waiting period, start over
Fig. 5 shows an example implementation of this scenario using Node-(RED)².
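The branching step of this scenario can be sketched as a small function; the threshold value and the action strings are illustrative assumptions, not the actual node implementation:

```javascript
// Sketch of the scenario's temperature branching (threshold and action
// names are illustrative): returns the list of robot actions to perform.
function receptionStep(temperatureCelsius, threshold = 37.5) {
  if (temperatureCelsius > threshold) {
    // Elevated temperature: reject the guest politely.
    return ["say:Please contact the staff, we cannot check you in.", "animation:sad"];
  }
  // Normal temperature: welcome the guest and point towards the reception.
  return ["say:Welcome! The reception is right over there.", "point:reception", "animation:happy"];
}

console.log(receptionStep(38.2)); // guest rejected
console.log(receptionStep(36.6)); // guest welcomed
```

In the visual flow, this decision corresponds to a single branching node between the thermal-camera node and the two voice-output branches.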

User Study
In order to evaluate our software in a real-world scenario with real users, we conducted a user study inspired by the procedure of Hoffman and Zhao [6]. Note that this user study has been designed as a system evaluation only; no human-robot interaction is performed in the course of it.
With respect to studies using Pepper and the related Nao robot, which displays largely similar properties and capabilities, Amirova et al. [2] present a comprehensive review listing a large number of user studies. While we were able to recruit 22 participants for our study, most studies use fewer than 20 participants; however, there are some that include more than 150 participants, e.g. [7].
Most of these studies, however, deal with interaction between human participants and the respective robot and do not cover a system evaluation of programming/configuring the robot. Therefore, their study duration per participant is much lower than the runtime of our use case. This allows for a larger base of participants, along with the fact that people can more easily be convinced to participate in evaluating the actual interaction rather than in programming the robot, which proved unattractive to many potential participants prior to our study.

4.2.1 Research Question and Metrics.
For the moderated user study, we define the research question as follows: Can individual application scenarios be implemented with good usability when programming service robots using a visual programming tool? With respect to this question, we identified the constructs of usability and learnability as central matters of the user study. They have been chosen because they investigate the strengths of visual programming platforms and cover all important aspects of our research question.
To assess the usability of the application, an approach similar to that of Niermann et al. [14] was chosen. We use the System Usability Scale (SUS) [3] as the main metric in our user study, a well-established and quickly collectable metric amongst researchers in the human-machine interaction domain. This ten-item questionnaire provides a measure of the subjective perception of the system in a short amount of time. However, the scale is designed exclusively as a metric and does not assist in identifying usability problems. For this reason, additional custom questions were added (but not included in the SUS score calculation) to better identify problems that could restrict usability.
The SUS questions, which have to be presented in a specific order as per the SUS definition to allow for comparability, alternate between positively and negatively worded items, each answered on a scale from "Do not agree at all" (1) to "Fully agree" (5). As a result, the questionnaire provides a single value that represents a summarized measure of usability, ranging from 0 to 100; however, it is not to be confused with a percentage and does not scale linearly. It should be noted that individual questions, considered in isolation, are not meaningful [16]. In addition to the SUS metric, demographic data was surveyed, as well as technical affinity, which is determined using the Affinity for Technology Interaction Scale (ATI Scale) [5]. Using this scale, a more informed assessment can be made of how well people with less technical affinity can work with the presented system. A low score (1) indicates a low affinity for technology, while a high score (6) indicates a high affinity. We used the short version of the ATI with only four questions instead of nine [21] because otherwise the questionnaire (see Appendix A) would have become excessively long.
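The SUS score computation itself is standard [3]: each odd (positively worded) item contributes its response minus 1, each even (negatively worded) item contributes 5 minus its response, and the sum is scaled by 2.5. A minimal sketch:

```javascript
// Standard SUS scoring [3]: ten responses on a 1-5 scale, alternating
// positively (odd items) and negatively (even items) worded questions.
function susScore(responses) {
  if (responses.length !== 10) throw new Error("SUS needs exactly 10 responses");
  const sum = responses.reduce((acc, r, i) => {
    // Items 1,3,5,7,9 (even index): score r - 1; items 2,4,6,8,10: score 5 - r.
    return acc + (i % 2 === 0 ? r - 1 : 5 - r);
  }, 0);
  return sum * 2.5; // final score in the range 0..100 (not a percentage)
}

// A fully positive response pattern yields the maximum score of 100:
console.log(susScore([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])); // → 100
```

This scaling is also why the resulting value must not be read as a percentage.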
Learnability is defined in ISO Standard 9241-110 such that "the interactive system supports discovery of its capabilities and how to use them, allows exploration of the interactive system, minimizes the need for learning and provides support when learning is needed" [9]. Therefore, learnability in the context of the study is evaluated as follows: firstly, by whether the study participants can satisfactorily create the required application scenario, and secondly, by how often questions about the application arose during the processing time.
With respect to the identified constructs, we formulate the following hypotheses to be accepted or rejected by our user study:
• H1a (usability): On average, the subjects rated the usability of the application as good or better. (SUS ≥ 74 [16])
• H0a (null hypothesis): The subjects rated the usability of the application as worse than good. (SUS < 74)
• H1b (learnability): More than 80% of the subjects can implement their own application scenarios after only a short training period (less than 15 minutes).
• H0b (null hypothesis): Less than 80% of the subjects can implement their own application scenarios after only a short training period (less than 15 minutes).

4.2.2 Participants and Procedure. A total of 22 participants (17 male, 4 female, 1 diverse) could be recruited as study participants.
The subjects were between 18 and 48 years old, with an average age of 27 years. All participants were recruited via university-internal mailing lists, internal billboard advertising and personal contact. 18 participants stated that they were university graduates or currently studying. Of these, 12 indicated a computer science-related discipline. The remaining 6 participants with academic backgrounds indicated a wide range of disciplines, such as mechanical engineering, linguistics or social work. All participants were assigned the same task and performed it individually. Initially, the experimenter briefed the participant on the purpose of the experiment, on the fact that participation is voluntary and that they could quit anytime without any negative consequences, and on the task to be fulfilled. The participant was then given an information sheet and gave verbal consent, after which the experiment started. For this, the experimenter first started a 5-minute video explaining the software for the participant to watch. The reason for explaining the software in a video rather than in person was to provide similar conditions for all participants. Thereafter, still without knowing the particular task to implement, the participant was allowed to try out the software and ask any questions about its usage for about 10 minutes, without starting to work on the task yet.
After all questions had been answered, the task as described in Section 4.1 was explained to the participant, who then started to implement it. The participant was allowed to run their implementation on the Pepper robot at any time. The robot was standing inactively in the field of view of the participant the whole time, not exhibiting any autonomous behavior except when triggered by the participant, and returned to inactivity afterwards. In case a participant reported to be done with the implementation but had never run the task on the robot, they were encouraged to do so. After the task was successfully fulfilled, or after a time limit of 30 minutes, the participants were asked to fill in the questionnaire in Appendix A. After completion, they were thanked for their participation and released. During the whole experiment, no personal data of the participants (video, audio, software usage data) was recorded, except for the answers to the questionnaire, any verbal comments about missing functionality as given in Table 1, and the duration of task completion.

4.3.1 Technical Affinity. Fig. 6 shows the answers with respect to technical affinity. The right area indicates a higher technical affinity, the left area a lower one. For a more intuitive visualization, questions 3 and 4 were inverted for display in the graph only. In addition, one data point was removed due to a missing answer.
Summarized for all participants, we obtained ATI scores as shown in Fig. 7. As can be seen from the ATI scores, the sample taken is not representative of the population, which includes everyone potentially configuring or programming service robots in their everyday work (e.g. service staff). These people, at large scale, would exhibit a very broad spectrum of technical affinity, and presumably technical ability, which is represented in the sample as can be seen from the minima and maxima of the obtained scores. However, we assume that, to date, only more technically affine persons would be willing to purchase and deploy such a robot in their everyday use case. Therefore, less technically affine persons are unlikely to own, program and configure (not to be confused with use) such systems.

Figure 8: Histogram of SUS score frequencies [3]

4.3.2 Usability.
A key point of our evaluation are the results of the SUS questions (marked as (SUS) in the questionnaire in Appendix A), which are given in Fig. 8. No other questions have been incorporated in the calculation of the SUS score. As can be seen in the histogram, the user evaluations lie in the range of 72.5 to 97.5. The mean of the whole data set is 84.43 ± 7.71 with a median of 85.0. Therefore, following [16], the usability of the application can be rated as very good. Accordingly, the null hypothesis H0a can be rejected and the alternative hypothesis H1a is accepted. The scatter plot in Fig. 9 visualizes the recorded SUS and ATI scores. However, one entry had to be removed, as no ATI score could be calculated for one subject. Because of the small and technically affine participant sample, a statistical correlation cannot be calculated from this data. Nevertheless, all participants achieved SUS scores > 70, hence rating the usability of the system as good to very good, with no indication of particularly low SUS scores for the lower-ATI participants.
In addition, towards the end of the study, the subjects were asked more generally about their satisfaction as an indicator of a sufficiently good experience, with the results shown in Fig. 10. Apart from one outlier, the participants stated that they enjoyed working with the robot and could imagine working with a robot again in the future.

4.3.3 Learnability.
For the evaluation of learnability, the main criterion was the successful creation of the application scenario within a period of less than 15 minutes, because the test persons were instructed not to ask any questions about the application during the creation phase. The reason is that this simulates a situation as close to reality as possible. However, the participants were encouraged to consult the integrated help texts in case of problems.
In the course of this study, all participants were able to implement the application scenario satisfactorily. In four cases, subjects felt compelled to ask questions. In all cases, the reason for the question was incorrect use of the Join node (see Section 3.1). After a short explanation by the moderator, these subjects were also able to complete the scenario satisfactorily. From this data we learned that the Join node seems complex for some participants and hence, in the aftermath of the study, we improved the node descriptions. Even if the previously mentioned problematic runs are assessed as unsuccessful, 82% of all runs have still been successful without external help.

4.3.4 Other Observations and Feedback. In addition to the given observations with respect to the hypotheses, we asked for the satisfaction with the overall functionality of the application as well as potentially missing functions, see Fig. 11 and Table 1, respectively. During the familiarization phase and the final interview, a total of three persons with prior experience in Choregraphe positively emphasized the intuitiveness and development speed of Node-(RED)².

Discussion
The results of this user study indicate that the constructs of usability and learnability have been sufficiently fulfilled, because all corresponding alternative hypotheses could be accepted. With respect to our research question (see Section 4.2.1), we can therefore conclude that it can generally be answered in a positive way. The thermal camera as an external non-robot device was interfaced seamlessly; none of the users raised any concerns or had trouble with the fact that two devices had to be integrated in the node graph. The tool integration can therefore be described as transparent. However, the limited sample size of the study does not allow extrapolation onto the population of potential Node-(RED)² users. This study can only serve as an indicator that visual programming in general and Node-(RED)² specifically satisfy the requirements of untrained users when programming and integrating robots together with IoT devices in real-world scenarios, like in the video described in Section 3.3. Further research is necessary utilizing a larger user base and more varied use cases, including more diverse IoT devices.

CONCLUSION AND FUTURE WORK
The work in this paper presents a software tool called Node-(RED)² that allows for simple configuration and scenario building for many types of robots, using standard software. It enables the integration of many other Internet of Things devices so that human-robot interaction scenarios can be implemented in a user-friendly way. It is usable in real-world scenarios where store owners who would like to deploy a robot, and other untrained but tech-savvy users, shall be empowered to change and adapt their robots' behaviors using the high-level capabilities they need. Since the software is available online, we hope for contributions from the community to grow the device ecosystem, so that the tool becomes more useful for all kinds of domains and applications.
In the future, we are planning to expand the tool to support multi-robot use cases, which opens up many possibilities, but also new issues with respect to deployment and usability. This means that connectivity has to be provided for different kinds of robots across all domains, ideally including online configuration directly within Node-(RED)² and auto-discovery of available devices. Integrating the high-level functionality of other robots is already supported in an efficient way: only a small JavaScript boilerplate snippet needs to be implemented per node, which triggers the respective REST server on the robot⁶. Hence, we hope that the community, through this ease of use, will provide connectivity to further robots as well.
With respect to usability, subflows would provide a useful extension to allow for more clarity, as requested by some users during the user study. Eventually, in order to facilitate deployment, a standalone solution like "Adapted Pepper" [4] could be developed that integrates the software within a single-board computer which, in a second increment, could be physically and electrically integrated with the respective robot.

Questionnaire items (continuation of Table 2):
1-6: The robot looks confidence-inspiring.
1-6: I think humanoid robots can be used in a useful way.
Application Scenario
1-6: I could imagine using a service robot for relief in my profession.
1-6: I consider the temperature measurement functionality as appropriate for the scenario.
Figure 2: Adaptations to Node-RED flow control (arrows: buttons to start/stop the flow)

Figure 3: Individual nodes as created for Node-(RED)² and the presented use case: the Get skin temperature node interfaces a thermal camera, whereas the Interaction, Appearance, and most of the Wait nodes interface the Pepper robot

Figure 5: Example solution of the presented use case

Figure 11: Satisfaction with the overall functionality

free text: In your opinion, what could be other scenarios where a service robot is useful?
Node-(RED)²
1-5: I think that I would like to use this system frequently. (SUS)
1-5: I found the system unnecessarily complex. (SUS)
1-5: I thought the system was easy to use. (SUS)
1-5: I think that I would need the support of a technical person to be able to use this system. (SUS)
1-5: I found the various functions in this system were well integrated. (SUS)
1-5: I thought there was too much inconsistency in this system. (SUS)
1-5: I would imagine that most people would learn to use this system very quickly. (SUS)
1-5: I found the system very cumbersome to use. (SUS)
1-5: I felt very confident using the system. (SUS)
1-5: I needed to learn a lot of things before I could get going with this system. (SUS)
1-6: The functionality of the application is sufficient for me.
free text: Do you miss any functions, in Node-RED as well as the robot? If so, which ones?
Pepper Robot
1-6: I enjoyed working with the robot.
1-6: I felt (physically) safe while using the robot.
1-6: Working with the robot has changed my confidence in a positive way.
1-6: I can imagine working with robots again in the future if I have the opportunity.
free text: Do you have any reservations about this type of robot (humanoid robot)? If yes, which ones?

Table 1: Feedback for missing functionality

Table 2: User Study Questionnaire
ATI
1-6: I like testing the functions of new technical systems. (ATI)
1-6: It is enough for me that a technical system works; I don't care how or why. (ATI)
Preconception
1-6: I would like to work with the robot.