Exploring Discrete and Continuous Input Control for AI-enhanced Assistive Robotic Arms

Robotic arms, integral in domestic care for individuals with motor impairments, enable them to perform Activities of Daily Living (ADLs) independently, reducing dependence on human caregivers. These collaborative robots require users to manage multiple Degrees-of-Freedom (DoFs) for tasks like grasping and manipulating objects. Conventional input devices, typically limited to two DoFs, necessitate frequent and complex mode switches to control individual DoFs. Modern adaptive controls with feed-forward multi-modal feedback reduce overall task completion time, the number of mode switches, and cognitive load. Despite the variety of input devices available, their effectiveness in adaptive settings with assistive robotics has yet to be thoroughly assessed. This study explores three different input devices by integrating them into an established XR framework for assistive robotics, evaluating them in a preliminary study and providing empirical insights for future developments.


INTRODUCTION
The progress in the development of (semi-)autonomous technologies has compelled their incorporation into numerous sectors, reshaping how we live and work. This integration includes scenarios of close collaboration with robotic devices, ranging from industrial assembly lines [4] to personal mobility aids [7]. Among these collaborative technologies, assistive robotic arms emerge as a particularly valuable and versatile subset, finding applications across various domains (e.g., [3,26]).
Assistive robotic arms can enhance the independence of individuals with restricted mobility [15,21]. These technologies, particularly when integrated with Artificial Intelligence (AI), empower individuals to perform Activities of Daily Living (ADLs), which often entail tasks like gripping and manipulating objects in their surroundings, without reliance on human assistance [24]. However, current Human-Robot Interaction (HRI) research underscores a notable challenge faced by developers: optimizing the autonomy level of assistive robots [16]. Striking a balance is crucial, as purely autonomous systems may diminish user interaction and trust, while manual controls could prove impractical for users with specific impairments [12,25,31]. Shared control, combining manual input with algorithmic assistance, emerges as a balanced approach and a promising research direction.
In this work, we explore three different input devices for controlling an assistive robotic arm in shared control applications:
• Joy-Con: A motion controller with continuous data input, suited for one-handed operation.
• Head: User control input through head movements, using continuous data.
• Button: A set of assistive buttons to control the robot in an accessible manner with discrete input data.

RELATED WORK
Standard control devices with a high Degree-of-Freedom (DoF), like gaming joysticks and keyboards, often pose challenges for users with severe motor impairments. Addressing these issues requires alternative solutions, such as specialized training or different interfaces [8,28]. An approach proposed by Herlant et al. addresses these challenges by reducing the number of DoFs through mode switches. In their successful implementation, a joystick was used to control a Kinova Jaco assistive robotic arm [10]. Alternatively, Arévalo-Arboleda et al. introduced a hands-free multi-modal interaction for tele-operating a robotic arm, combining head movements, using a head-gaze-based cursor to point, with speech commands to execute specific actions [2]. However, while speech commands provide enhanced accessibility, challenges like environmental noise or speech impairments persist, impacting their effectiveness [18].
The control of assistive robotic arms involves a wide array of possible input devices, each tailored to suit the preferences and capabilities of the respective user [1]. Despite this diversity, there remains a gap in the evaluation of these input devices within the context of AI-enhanced shared control applications for assistive robots.
In previous research, we introduced the AdaptiX framework, an open-source XR tool designed for Design and Development (D&D) operations [22]. AdaptiX consists of a Virtual Reality (VR) simulation environment to prepare and test study settings as well as a Robot Operating System (ROS) interface to control a physical robotic arm. The framework also includes a general input adapter, facilitating the development and evaluation of different input technologies and devices. Leveraging these capabilities, AdaptiX is used as the basis for this research project.
Through an algorithmic approach, the robotic arm's DoFs are configured to enable precise control with a low-DoF input device. This adaptive DoF mapping, denoted as Adaptive DoF Mapping Control (ADMC), presents the user with a set of DoF mappings, organized based on their effectiveness in executing the pick-and-place task employed in the experiment (optimal suggestion, adjusted/orthogonal suggestion, translation-only, rotation-only, and gripper). The underlying concept of "usefulness" posits that a mapping is most advantageous when it aligns the robot's cardinal DoFs with an input DoF while advancing towards the next goal.
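This notion of "usefulness" can be illustrated with a minimal sketch. The following is our own illustration, not the AdaptiX implementation: each candidate mapping is scored by the cosine alignment between its motion direction and the vector toward the next goal, and suggestions are ordered by that score. The function and mapping names are assumptions.

```python
import math

def score_mapping(direction, goal_vector):
    """Cosine alignment between a mapping's motion direction and the
    vector toward the next goal; higher means more useful."""
    dot = sum(d * g for d, g in zip(direction, goal_vector))
    norms = (math.sqrt(sum(d * d for d in direction))
             * math.sqrt(sum(g * g for g in goal_vector)))
    return dot / norms

def rank_suggestions(candidates, goal):
    """Order candidate DoF mappings from most to least useful."""
    return sorted(candidates,
                  key=lambda name: score_mapping(candidates[name], goal),
                  reverse=True)
```

For a goal straight ahead along x, a mapping pointing at the goal outranks a diagonal one, which outranks an orthogonal one.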

DISCRETE AND CONTINUOUS CONTROL METHODS
Owing to AdaptiX's integration of ADMC, users control the robotic arm forwards or backwards along a defined path based on the DoF mapping. Consequently, only a single-DoF input device is necessary for the movement. To choose from the system's different DoF mapping suggestions, an additional one-dimensional input is required to perform a mode switch action, providing flexible and efficient control of the robotic arm.
Expanding upon the functionalities of AdaptiX, this study focuses on discrete and continuous control methods serving as assistive input devices for the ADMC shared control application. The framework's general input adapter provides a float value (−1.000 to 1.000) for the Adaptive Axis and a Boolean trigger for Switch Mode.
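As a minimal sketch of this two-channel contract (the type and function names are our own, not the framework's actual API), every device ultimately reduces to one clamped float plus one Boolean:

```python
from dataclasses import dataclass

@dataclass
class InputFrame:
    """One sample delivered to the general input adapter."""
    adaptive_axis: float  # continuous value, clamped to [-1.0, 1.0]
    switch_mode: bool     # discrete trigger for the mode switch

def make_frame(axis_value, switch_pressed):
    """Clamp raw device output into the adapter's expected range."""
    clamped = max(-1.0, min(1.0, axis_value))
    return InputFrame(adaptive_axis=clamped, switch_mode=bool(switch_pressed))
```

Each of the three control methods below then only has to produce such a frame from its raw sensor or button state.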

Motion Controller
Prior studies [14,23] used a Meta Quest motion controller to interact with the AdaptiX framework. To add to this, we integrated a Nintendo Joy-Con [20], which is well suited for one-handed operation. For the integration, we used UE4-JoyConDriver [6], a plugin for Unreal Engine 4.27/5.2. The plugin connects Unreal Engine to the Nintendo Joy-Con and provides Inertial Measurement Unit (IMU) sensor data, such as accelerometer and gyroscope readings.
The left controller was selected for its balanced layout, accommodating both left- and right-handed users. The thumbstick, which provides continuous data, is tilted up or down to move the robotic arm forward or backward. The mode switch is performed by pressing the Up button right beneath the thumbstick. This design ensures single-handed control of the robot while preventing simultaneous movement and mode switching for enhanced usability.
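A rough sketch of this mapping (our own illustration; the tuple return shape and the exact suppression rule are assumptions based on the description above):

```python
def joycon_input(thumbstick_y, up_button_pressed):
    """Map Joy-Con state to (adaptive_axis, switch_mode).

    While the Up button is held, thumbstick input is suppressed so the
    robot cannot move and switch modes at the same time.
    """
    if up_button_pressed:
        return 0.0, True
    axis = max(-1.0, min(1.0, thumbstick_y))  # tilt up = forward
    return axis, False
```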

Head-based Control
This control method eliminates the need for extra, specialized input devices, as it utilizes orientation data from a device the user is already using: the Head-Mounted Display (HMD) [29]. Furthermore, it offers an accessible approach by allowing users with impaired hand motor function to operate the robotic arm.
The HMD's internal sensor technology, specifically its IMUs, facilitates the measurement of head rotations along three axes (roll, pitch, and yaw). This coordinate system is anchored to the user's head, with its origin at the head's center. Positive and negative rotations are possible around each axis, facilitating the mapping of six distinct actions to the corresponding axis rotations.
When the user tilts their head in a positive manner (pitch; rotating the head upwards), the robotic arm advances along the DoF-mapped trajectory. Conversely, tilting the head in the opposite direction causes the arm to move backward along that path. Rolling the head to the right triggers the mode switch action, selecting the next ADMC suggestion.
Along each head rotation axis, a 20° resting zone has been set to prevent unintentional control of the robot. In this application, the user's head serves as a continuous data source for controlling the robot, akin to a joystick or the Joy-Con's thumbstick.
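The head mapping can be sketched as follows. This is a hypothetical illustration: the 20° resting zone comes from the text, while the 60° full-deflection angle and the sign convention for roll are assumed parameters.

```python
def head_input(pitch_deg, roll_deg, resting_zone=20.0, full_deflection=60.0):
    """Map head rotation to (adaptive_axis, switch_mode).

    Pitch outside the resting zone drives the arm forward/backward;
    rolling the head to the right past the resting zone triggers a
    mode switch. Positive pitch = head tilted up (assumed convention).
    """
    axis = 0.0
    if abs(pitch_deg) > resting_zone:
        span = full_deflection - resting_zone
        magnitude = min(1.0, (abs(pitch_deg) - resting_zone) / span)
        axis = magnitude if pitch_deg > 0 else -magnitude
    switch = roll_deg > resting_zone  # positive roll = to the right (assumed)
    return axis, switch
```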

Assistive Buttons
Integrating the Microsoft Xbox Adaptive Controller [19], which emphasizes flexibility and accessibility, enables the use of assistive buttons (e.g., the Logitech Adaptive Gaming Kit [17]). These can be quickly and flexibly arranged to ensure comfortable operation by the user.
Similar to gamepad control with discrete input data, the elementary actions for moving forward and backward are mapped onto the adaptive buttons. The buttons marked Arrow up and Arrow down move the robotic arm, while a button with an A marking is assigned to the mode switch.
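In contrast to the two continuous methods, this mapping is purely discrete; a sketch (our own illustration, including the choice that conflicting presses cancel out):

```python
def buttons_input(arrow_up, arrow_down, a_button):
    """Map assistive-button state to (adaptive_axis, switch_mode).

    Buttons yield full deflection or nothing; pressing both movement
    buttons at once cancels to no movement.
    """
    if arrow_up and not arrow_down:
        axis = 1.0
    elif arrow_down and not arrow_up:
        axis = -1.0
    else:
        axis = 0.0
    return axis, bool(a_button)
```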

STUDY
This preliminary study gathered initial user experiences with different modalities and operating modes for AI-enhanced assistive robotic arms. Through a controlled Mixed Reality (MR) user study involving 14 participants (6 female, 8 male), we systematically compared the advantages and disadvantages of the selected input methods. Four participants had prior experience with robotic arms.

Study Design
We employed a within-participant experimental design, with the control method as the independent variable, comprising three conditions: (1) Joy-Con, (2) Head, and (3) Button. Each participant underwent eight trials per condition. To mitigate the potential impacts of learning and fatigue, the condition order was fully counterbalanced.

Apparatus
Our study used the AdaptiX [22] framework to integrate and assess the selected control methods. We operated the framework in its MR mode, employing a Varjo XR-3 [29] HMD and a Kinova Jaco 2 [13] assistive robotic arm, as shown in Figure 1. We connected the Varjo XR-3 and all input devices to a Schenker Media Station computer. Furthermore, we connected the Schenker Media Station, the ROS server, and the Kinova Jaco 2 through a wired Local Area Network (LAN).

Procedure
Before starting, participants received a detailed explanation of the project's objectives and the tasks involved. Each participant provided informed consent, including for the recording of video, audio, and any other relevant data. A study administrator, overseeing the experiment on a laptop, provided instructions on using the hardware and the study environment. Once set up, participants followed command prompts within the MR environment. For each of the three conditions, the following steps were performed: (1) Participants were given a written, standardized explanation of the control method used in the current condition. (2) Participants conducted eight trials, grasping the object and placing it on the target surface. (3) Participants completed an interview and questionnaires. After completing all conditions, participants ranked the three control methods from most to least preferred and explained their decision. The study concluded with a debriefing.

Experimental Task
The experimental procedure builds on prior research that employed the AdaptiX framework (refer to [23]). The present study expands the configuration to a real-world environment, replicating a typical pick-and-place scenario.
To commence each trial, the study administrator positioned an object on a table. The participant aimed to navigate the robot from its initial location to grasp the object and deposit it onto a designated target area on the same table. For each trial, the object's starting position varied among eight predetermined locations, presented in randomized order. We employed uniform rounded blocks as objects to eliminate bias and allow consistent comparisons across trials. Users could adjust the robot's DoF mapping by toggling between modes to fulfill the task. Following a successful execution, the object was removed, and the robot returned to its initial position. The object was then placed in a new starting position for the subsequent trial. Upon completing each condition, we assessed workload using the NASA Raw-Task Load Index (Raw-TLX) questionnaire [9] and measured the five dimensions of the Questionnaire for the Evaluation of Physical Assistive Devices (QUEAD) [27]. Task completion time was recorded from the moment the participant initiated the movement of the robotic arm until the block was successfully placed.

RESULTS
This research focused on collecting subjective feedback from participants to improve the future development and integration of control input methods for shared control applications. The presented study encompasses a total of 336 measured trials (14 participants × 3 control methods × 8 trials).

Evaluation of Physical Assistive Devices
The QUEAD comprised five individual scales (7-point Likert). Friedman tests for the individual dimensions revealed significant main effects for all dimensions. Post-hoc pairwise comparisons indicate significant differences between Head and Button for all five dimensions, as well as between Head and Joy-Con for Perceived Usefulness (PU), Perceived Ease of Use (PEU), Emotions (E), and Comfort (C). For Joy-Con and Button, only PU and PEU show significant differences (refer to Table 1 for detailed scores).
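For readers reproducing this kind of analysis, the omnibus statistic can be computed directly. The following is a generic sketch of the Friedman chi-square for a within-participant design (without tie correction), not the study's actual analysis code, and the example values in the test are illustrative, not the study's data:

```python
def friedman_statistic(data):
    """Friedman chi-square for repeated measures.

    data: list of per-participant rows, one score per condition.
    """
    n, k = len(data), len(data[0])
    rank_sums = [0.0] * k
    for row in data:
        # Rank conditions within each participant (1 = lowest),
        # assigning the average rank to ties.
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg_rank = (i + j) / 2 + 1
            for m in range(i, j + 1):
                ranks[order[m]] = avg_rank
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    return (12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums)
            - 3 * n * (k + 1))
```

Significant omnibus results would then be followed by pairwise post-hoc comparisons, as reported above.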

Individual Ranking
All participants, except one, ranked conditions

Subjective Feedback
Participants noted an increased mental workload during the Head-based interaction. P01 highlighted that the movement execution for "forward felt opposite to the suggested arrow direction". Additionally, P01 was quickly distracted by a conversation with the experimenter, and P02 required substantial assistance due to difficulties in perceiving the arrows and mapping them to the head-movement direction. Participants P01 to P04 suggested introducing an additional mode switch to display the previous suggestion rather than presenting the next one. Participants P04, P11, and P12 preferred non-continuous head control (i.e., only stop and go) to "prevent unintentional robot control when returning their head to the zero position" (P11).

Similar to the Head-based interaction, participants P01 to P04 mentioned a discrepancy between the arrows suggested by the system and the control input. In certain situations, the system suggests movements in the user's direction. To move the robot along this trajectory (forwards), the thumbstick of the Joy-Con or the Arrow up assistive button had to be pressed, which felt "discrepant". Participant P04 suggested using the thumbstick of the Joy-Con instead of the selected button for mode switching, for example, by tilting it sideways.
Additionally, it was observed that specific initial placements of the object were perceived as disadvantageous compared to others, as the robot is fixed in place and has to perform movements that novice users found illegible in order to reach the target.

DISCUSSION
All participants were able to control the robotic arm with each input device and fulfill the task. Yet, the study's findings indicate that the effectiveness of the Head-based interaction method for controlling the robotic arm is relatively low compared to both hand-operated input methods. A notable insight derived from these results is that the Varjo XR-3 HMD may be too bulky and heavy for sustained and precise Head-based control. To address these concerns, a more lightweight and comfortable solution, such as utilizing external IMUs for Head-based interaction [11,30], could be considered.
Nevertheless, the HMD remains essential for visualizing directional cues, even with the integration of IMUs. Looking forward, advancements in technology are expected to yield significantly more compact and lighter devices, thereby enhancing user comfort and immersion.
Further, participants pointed out a discrepancy between the robot's movement direction and the mapping of user inputs. This could lead to an unclear mental model, particularly since the robot is controlled in a first-person view. To counteract this issue, a more extensive familiarization phase might be beneficial.

CONCLUSION
The input methods Joy-Con and Button represent promising approaches for controlling a robotic arm in a shared control application. Notably, both hand-operated input methods, irrespective of whether they provide discrete or continuous input data, (1) reduced perceived user workload and (2) improved Perceived Usefulness, Perceived Ease of Use, Emotions, and Comfort. These findings hold valuable implications for HRI researchers involved in the design of input technologies for assistive robotic arms. Future research should prioritize a nuanced analysis of both quantitative and qualitative feedback obtained from focus groups. This comprehensive approach aims to refine and develop optimal methods for robot motion control, with the overarching goal of improving usability, safety, and end-user acceptance of these technologies.
Still, given the diverse likes and dislikes of the participants, future development of adaptive input control methods should, in line with Burkolter et al., include individualization options to increase comfort and end-user acceptance [5].

Figure 1 :
Figure 1: Overview of the study setup. The participant is wearing a Varjo XR-3 HMD and controls the Kinova Jaco 2 via head movements. The goal is to grasp the light-colored rounded block and place it on the large orange square in the middle of the table. The small orange markings are potential starting points for the rounded block.

Figure 2 :
Figure 2: Comparison of the task load dimensions for the three different control methods: Joy-Con, Head, and Button