UbiSurface: A Robotic Touch Surface for Supporting Mid-air Planar Interactions in Room-Scale VR

Room-scale VR has been considered an alternative to physical office workspaces. For office activities, users frequently require planar input methods, such as typing or handwriting, to quickly record annotations to virtual content. However, current off-the-shelf VR HMD setups rely on mid-air interactions, which can cause arm fatigue and decrease input accuracy. To address this issue, we propose UbiSurface, a robotic touch surface that can automatically reposition itself to physically present a virtual planar input surface (VR whiteboard, VR canvas, etc.) to users and to permit them to achieve accurate and fatigue-less input while walking around a virtual room. We design and implement a prototype of UbiSurface that can dynamically change a canvas-sized touch surface's position, height, and pitch and yaw angles to adapt to virtual surfaces spatially arranged at various locations and angles around a virtual room. We then conduct studies to validate its technical performance and examine how UbiSurface facilitates the user's primary mid-air planar interactions, such as painting and writing in a room-scale VR setup. Our results indicate that this system reduces arm fatigue and increases input accuracy, especially for writing tasks. We then discuss the potential benefits and challenges of robotic touch devices for future room-scale VR setups.


INTRODUCTION
Room-scale VR has attracted increasing attention for its ability to allow HMD users to perform natural walking while interacting with virtual objects fixed throughout a room. Compared with traditional locomotion interfaces using gestural input (e.g., teleportation with ray casting), natural walking makes the HMD user's VR experience more immersive [14,53,60]. Example scenarios include life-size scientific data visualization, architectural design, furniture design, art studios, and offices. In these scenarios, HMD users can walk around spatially visualized data, 3D models, canvases, and information displays, where walking among the contents provides a better spatial understanding of the room structure and of content properties such as scale, arrangement, and shape.
However, the current primary input mechanisms rely on mid-air interactions, which impose a significant burden on the user's arm: fatigue builds quickly while hand control accuracy progressively degrades [8,15]. This phenomenon is called Gorilla Arm Syndrome [9,15,31,40] and is well known as one of the unsolved issues of current off-the-shelf HMD setups. Among the various types of common mid-air interactions (e.g., postures and gestures), we focus on planar interaction. Examples include pointing, steering, typing, writing, and drawing, all performed on virtual surfaces (e.g., a software keyboard, canvas, or notebook). We believe such planar input on virtual surfaces is crucial in workspace activities such as content design, depiction, and documentation. Compared to gestural command input with rougher hand motions, writing a short sentence or sketching a brief diagram on a virtual note requires more precise hand control, significantly increasing whole-arm fatigue in the air.
To mitigate this issue, researchers have proposed several solutions. The most naive is placing physical guidance (i.e., a prop) in the environment to support users' elbows or fingers during mid-air interactions [24,62,64]. However, these methods require careful preparation of props beforehand. The second is designing interaction techniques that avoid keeping the arms raised, either by remapping the motor input to the visual content input [22,36] or by using eye-gaze information together with hand input [72]. While such indirect input mechanisms can be easily installed, they generally force users to adapt to unconventional interaction styles, many of which have not been shown to support precise input. The third is adding a touchscreen held in the user's non-dominant hand [7,18,45,55,67,68], onto which the user inputs touch motions with the dominant hand. Although this method can support fundamental planar inputs, it significantly restricts the user's posture and interaction possibilities (e.g., bi-manual input). Consequently, since each of these approaches has its limitations, we seek a more effective solution that retains familiar direct interactions with a natural posture.
Besides mid-air interaction techniques, our survey identified that ungrounded encounter-type haptic devices using moving physical props have proven effective in providing haptics for dynamic VR content around a room [12,46,56,69,71]. Although their primary goal differs from ours, the basic properties of their prototypes (e.g., human-sized props, locomotion capabilities) could be leveraged to support the user's arms in room-scale VR experiences.
In this work, we propose UbiSurface, a self-actuated robotic touch surface that can dynamically reposition itself to physically present to the user a virtual planar input surface (e.g., a VR whiteboard or VR drawing canvas) arranged in a virtual room, supporting accurate and fatigue-less mid-air planar input (writing, typing, etc.) within room-scale VR. The touch surface is driven by three types of actuators that together provide five degrees of freedom, including horizontal translation (x, y), height (z), and yaw and pitch rotation, allowing it to render virtual planar surfaces arranged at various positions and angles in the virtual room. The robotic touch surface is normally positioned near the walking VR user; when prolonged mid-air interaction occurs, it is automatically repositioned and reconfigured to provide a physical touch surface that matches the required virtual surface (e.g., whiteboard, memo pad, 3D model).
Fig. 1 illustrates a usage example of UbiSurface in a virtual office. The user initially types text using a virtual keyboard on a desk (Fig. 1 a). He then moves to the whiteboard next to the desk (Fig. 1 b) to draft his concepts in handwriting (Fig. 1 c). He can also come back to continue typing text. At every step, UbiSurface's movements and surface reorientation allow it to continue supporting the user's sequential mid-air interactions occurring at different positions and angles. In this paper, we discuss UbiSurface's design in detail and report its first prototype built from a set of actuators and off-the-shelf materials. We then conduct a technical study to clarify its positioning performance, as well as a user experience study that clarifies how it supports primary mid-air interactions.
The contributions of this work are 1) proposing UbiSurface, a new repositioning and reconfiguring robotic touch surface supporting a user's mid-air planar interactions at various locations in room-scale VR, 2) detailing its first prototype and reporting technical performance, and 3) demonstrating how it supports the user's primary mid-air-interaction scenarios through a user study.

RELATED WORK
Here, we briefly outline three domains of prior work related to our proposal: mid-air interaction, physically supported VR interaction, and robotic props.

Mid-air interaction
Numerous interaction styles can be used in room-scale VR setups, including hand-held controllers (e.g., [29]), hand gestures (e.g., [1]), and natural user interfaces (NUIs) (e.g., [35,37,50]). NUIs are generally designed for different input strategies, whereas hand controllers and gestures allow for planar interactions (e.g., typing, drawing, annotation). Although these modes form the current mainstream, several studies have pointed out that they can cause arm fatigue and inaccurate input. For example, Arora et al. reported that the accuracy of mid-air interaction is lower than that on a physical surface or guide [8]. Similar effects also arise in 3D sketching without any physical support, which does not permit detailed drawing [7,33,64]. These issues are often called "Gorilla Arm Syndrome," which makes prolonged mid-air hand interaction significantly more difficult [9,15,26,31,40]. To address this challenge, several interaction techniques have been proposed to allow users to manipulate overhead content from a more comfortable arms-down posture. For example, input offsets have been explored in which VR users' input in an arms-down posture (such as at chest or belt level) is mapped to content manipulation at eye-level height [13,22,36]. Recent advances in eye-tracking systems within VR headsets offer interaction possibilities that combine the user's eye gaze with hand-based selection methods [65,72], which have been reported to be effective in mitigating arm fatigue thanks to more comfortable arm postures [72]. Such indirect interactions are quite powerful, yet they have not been well studied in the context of writing and drawing.

Physical support for mid-air interaction
To support mid-air interactions, the effectiveness of physically supporting the user's arm or fingers has been confirmed [8,41,64]. Specifically, the use of an armrest has been reported to be effective in mitigating arm fatigue [24,62]. However, an issue with this approach is that the user's arm postures are restricted to the supported joints, and appropriately sized and shaped physical props need to be anchored in the work area beforehand. Despite these limitations, this simple idea of adding physical support has been explored for various types of interaction techniques.
First, tracked tablet devices have been employed in VR interaction (e.g., Virtual Notepad [48], Slicing-Volume [45], VRSketchIn [18]) to enable more accurate pointing and VR sketching input on the surface. TabletInVR [55] proposed a range of object manipulations leveraging a multi-touch tablet's capabilities. Wang et al. [67,68] proposed attaching a multi-touch screen tablet to the non-dominant forearm. Smartphones are also frequently used as VR controllers (e.g., Phonetroller [39], VRySmart [38], text typing [11], and navigation [17]). These approaches generally raise the additional issue of non-dominant-arm fatigue, and precise input remains difficult due to the instability of a touchscreen held in the hands.

The second approach uses wearable haptic or force devices. Traditionally, passive or actively controlled strings or wires have been widely studied (e.g., SPIDER-W [47], HapticSphere [66], STRIVE [5]). However, their original aim was to provide force feedback that improves contact interaction with VR content. While such devices often have sufficient stiffness, their range of support is typically narrow, and wearing them is cumbersome and precludes on-demand use of the device.
The third approach uses the user's own body as a physical surface. For example, ActiTouch [73] and HandPainter [32] proposed planar content interactions performed by touching the hand or forearm of the user's non-dominant arm with the dominant hand's fingertip. Handwriting Velcro is a touch sensor supporting handwriting in AR scenarios that can be fixed on various body parts [21]. However, similar to the first approach, the interactive surface is generally small, and non-dominant-arm fatigue remains a concern.
Finally, we can also consider utilizing a large grounded touchscreen to support surface touch interactions [16,54] or to provide force feedback through actuators (e.g., [51]); however, these concepts were designed for fixed desktop VR.
We therefore find clear benefits in added physicality. Our approach also adds a physical touch surface, standing on the shoulders of these prior works but significantly extending them with a unique motion mechanism and a surface size sufficient for office-like room-scale VR experiences.

Robotic haptic device
Robotic haptic devices have been introduced to render haptic sensations or forces when interacting with virtual content. Most of them were originally designed to improve the realism of the virtual world. However, in its use of external robots to add physicality to the VR experience, this research topic is strongly related to our approach.
While there are numerous robotic haptic devices, here we focus on encounter-type devices, in which props automatically move themselves to render a haptic sensation when the user's finger contacts virtual content [43]. For room-scale VR scenarios, the haptic presentation of entire room objects is challenging because a robotic arm's range is generally limited: such robots tend to be spatially fixed (e.g., [6,27,42,63]) or designed for tabletop-size content (e.g., [52,57]). Consequently, many researchers have used drones to offer room-scale haptic representation [2,3,23,59,70]. However, their stability in the air is not sufficient for supporting mid-air interaction.
EncounterLimbs [28] mixes the encounter-type robot and wearable approaches. The VR user wears a robotic backpack with a two-joint arm. A tablet-sized plate is fixed to the arm's endpoint and placed within the user's arm reach. The plate is automatically adjusted to correctly render physical contact when the user touches virtual objects fixed in the virtual world. This device might sufficiently support the user's arm in the air, since it withstands 28.4–112 N depending on the pushing location. However, it still has a limited surface size, and from the perspective of our concept, such additional body-worn weight might negatively affect the user's workload.
The use of ground robots has been increasingly explored. For example, human-sized robotic walls and props representing the haptics of virtual room infrastructure have been investigated [25,56,69,71]. These systems move robots around the room, which can follow walking users in room-scale VR, and their enclosures physically represent the virtual surfaces. These researchers generally focused on algorithms for encounter-type prop control, providing proper haptic sensations or improving the realism of the experience. CoboDeck is a recently proposed room-scale haptic system using a high-end collaborative mobile robot with a robotic arm [46]. While its motivation and the robot's motion degrees of freedom resemble ours, its primary contribution was a safe and effective robot control mechanism, and it did not examine user experience. In summary, the usage techniques and potential of robotic devices for assisting users' mid-air interactions remain largely unexplored.

UBISURFACE
As described in the previous section, the design of a robotic prop supporting the user's mid-air planar interactions in free-walking scenarios has not yet been explored. Thus, we propose UbiSurface, a robotic touch surface that provides HMD users a stable physical touch panel at the same location and angle as the virtual surface they touch, without interfering with their free walking in room-scale VR.
Similar to existing robotic haptic devices, we rely on an external moving robot, so the user needs no additional worn devices, and the robot's relatively large enclosure offers more stable physical support than on-body props. Using an external robot might increase the overall system setup cost, but it ultimately allows more flexible operation than wearable or additional-controller approaches. For example, the user's own setup can remain compact, and they can call up the robotic device on demand whenever mid-air interactions require it. Furthermore, robots are increasingly deployed in our homes, offices, and workplaces (e.g., cleaning robots, warehouse robots, assistive robots), which also supports our basic concept. In the following, we discuss our scope, design considerations, and implementation.

Assumption and scope
We first set out our assumptions and scope for developing a feasible system.
We assume that VR users are mainly standing or walking, as in primary room-scale VR use cases such as offices, design studios, and life-size data visualization. Presently, we do not support sitting users, in order to simplify the hardware design.
Our scope covers mid-air planar interaction, such as sketching and writing with fingers or writing tools, which are key actions for VR office users.
Our goal is to mitigate arm fatigue and support accurate mid-air planar input. While our system offers haptic feedback, immersion or realism is not our main concern. Therefore, we do not aim to build an ideal encounter-type haptic device, since its fundamental technical elements, such as control and user-goal-prediction algorithms, have been extensively studied in previous work (e.g., [46,56,57,71]) whose knowledge could be incorporated later if needed. Another technical consideration is safety, which is particularly important given our motivation and robotic-device approach. We have configured a relatively large and heavy robotic device (around 20 kg, see Fig. 2 right) with moving parts that may reach the user's chest or shoulders. Such a robot must not move faster than 0.4 m/s under industrial regulations for collaborative robots (Transient Contact Speed Limits, ISO/TS 15066:2016 [30]), which we follow at this early prototyping stage.

Design considerations
We set the following four design requirements.
1. Physical input support: In the real world, people adjust their hand motor control based on the physical reaction of the objects they touch. The inaccuracy of mid-air interactions in VR systems results from mismatches between VR and real-world sensory feedback, which degrade manual dexterity. To correct this mismatch, we add a physical surface so that users can optimize their arm control by moving their writing tool on it.
2. Canvas-size interaction surface: Considering basic scenarios and activities using fingers or writing materials (pen, brush, etc.) in virtual office contexts, we believe a canvas-sized surface is required to let users apply both hands to text typing, writing a short sentence, and illustrating a diagram on a single surface. Although numerous previous works have used smartphones (e.g., [11,38,39]) or tablet touchscreens (e.g., [18,45,55,68]), they only supported pointing interactions. In addition to the form factor of the surface, a touch sensing system should be installed to capture the user's precise on-screen input.

3. Arbitrary surface repositioning: To support various interactions in room-scale VR, the surface should be automatically redirected and repositioned at arbitrary locations and angles, in line with our assumption of a standing user. Reflecting the typical use of a drawing canvas, we assume the movement degrees of freedom should cover three-axis translation (x, y, and z) and two rotations (yaw and pitch). We consider the surface's roll rotation (i.e., landscape to/from portrait) optional because z-axis surface motion is sufficient to create a vertically longer input space. Fig. 2 left summarizes the motion degrees of freedom required for the first UbiSurface prototype.
4. Explicit operation: Safety is a mandatory design requirement. We installed basic safety mechanisms such as collision avoidance and emergency-stop systems. Unlike encounter-type prop repositioning systems [6,28,71] based on implicit user-goal prediction, we operate the system based on explicit requests from users or predefined scenarios. For example, the user can call the robotic touch surface only when they want it, or the system sends the robotic touch surface to the virtual surface closest to the user (the method used in the user study described later; a minimal sketch of this dispatch policy follows this list). This policy simplifies the system's workflow (e.g., eliminating the need for a prediction algorithm).
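The closest-surface dispatch in requirement 4 can be expressed compactly. The following is a minimal sketch under our assumptions; the function name and the surface-list format are hypothetical and not part of the actual implementation.

```python
import math

def closest_surface(user_xy, surfaces):
    """Return the (surface_id, (x, y)) entry nearest to the user's position."""
    return min(surfaces, key=lambda s: math.dist(user_xy, s[1]))

# Example: with the user at (1.0, 2.0), the whiteboard wins over the desk.
print(closest_surface((1.0, 2.0), [("whiteboard", (1.2, 2.5)), ("desk", (3.0, 0.5))]))
```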

Prototype

Components. Fig. 2 right gives an overview of the prototype's configuration: a touch surface, a motion tracker, a lifting apparatus, and a locomotion actuator. For the touch surface, we used a lightweight 32-inch PQLabs infrared-based multi-touch screen. It is placed at the top of the robotic device, and to change its position, height, and orientation, we assembled a unique lifting apparatus with a central lift actuator fixed inside a metal base unit with four rolling wheels. Two servo motors (Zorsky DS5160 high-torque full-metal digital steering servos) are installed at the top of the lift, and the touch surface is fixed on these servo motors, which allow adjusting the surface's angle.

Functionalities.
With the current implementation, the pitch angle of the touch surface can be adjusted in the range of 0-180 degrees. The height can be changed from 0.85 m to 1.45 m. Here, 0.85 m is a suitable height for using the touch surface as a tabletop, while 1.45 m simulates a vertical screen (i.e., a whiteboard) for a standing user. The vertical elevation mechanism raises and lowers the touch surface at 0.035 m/s. The maximum movement speed of the omnidirectional mobile robot was kept to 0.4 m/s, which is slower than the walking speed of an HMD user in a virtual room (about 1 m/s [44]) but reasonable for safety, considering the need to avoid collisions between the user and the robotic touch surface. The entire device weighs about 18 kg.
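These specifications imply that reconfiguration time is dominated by the slow lift. A back-of-the-envelope sketch under our assumptions follows (the paper does not state that the actuators run in parallel; we assume they can):

```python
MAX_TRAVEL_M = 3.5          # longest travel across the tracking area used later
TRAVEL_SPEED = 0.4          # m/s, the safety-capped locomotion speed
LIFT_RANGE_M = 1.45 - 0.85  # 0.6 m of vertical range
LIFT_SPEED = 0.035          # m/s

travel_time = MAX_TRAVEL_M / TRAVEL_SPEED  # ~8.8 s
lift_time = LIFT_RANGE_M / LIFT_SPEED      # ~17.1 s, the dominant term
print(f"travel {travel_time:.1f} s, lift {lift_time:.1f} s, "
      f"worst case ~{max(travel_time, lift_time):.1f} s")
```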

System workflow
Fig. 3 shows an overview of the system workflow. The system comprises an HTC VIVE tracking system, an HMD (HTC VIVE Pro), the proposed UbiSurface device, and a Windows server. The server runs a script to acquire the positions and orientations of the HMD and of UbiSurface's VIVE trackers. When the HMD user initiates mid-air planar interactions or sends a request, the server acquires the position and angle of the virtual input surface. The Goal Determiner script then determines UbiSurface's goal based on the given information. Next, the Path Planner script calculates UbiSurface's motion path, employing the RVO path-planning algorithm [61] to avoid collisions with the HMD user. Finally, the server manages UbiSurface's entire travel to the goal.
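The following is a minimal sketch of this workflow under stated assumptions: the Pose structure and the clamping ranges mirror the prototype's specifications above, while a straight-line path stands in for the RVO planner [61]; none of these names come from the actual codebase.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    height: float
    yaw: float
    pitch: float

def determine_goal(virtual_surface: Pose) -> Pose:
    """Goal Determiner: match the virtual surface pose, clamped to the
    prototype's mechanical ranges (height 0.85-1.45 m, pitch 0-180 deg)."""
    return Pose(virtual_surface.x, virtual_surface.y,
                min(max(virtual_surface.height, 0.85), 1.45),
                virtual_surface.yaw,
                min(max(virtual_surface.pitch, 0.0), 180.0))

def plan_path(robot: Pose, goal: Pose, v_max: float = 0.4):
    """Path Planner: the real system routes around the walking user with RVO;
    here we return a straight-line placeholder plus the safety speed cap."""
    return [robot, goal], v_max
```

In the deployed system, the placeholder straight-line path would be replaced by the RVO trajectory computed against the live HMD pose.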
For the aforementioned UbiSurface control, we employed an explicit control mechanism in which the user presses a button on the VR controller or performs a gesture following predefined interaction templates. We used the predefined control in the user study described later.

Visualization of UbiSurface in VR
Following suggestions from previous studies on robotic props (e.g., [28,58,71]), we mitigate user anxiety and increase safety by visualizing the moving robot in the VR view, helping the user anticipate when and how the robot is approaching. We introduced simple visualizations that show the physical surface and base unit locations of UbiSurface along with a white virtual canvas, as shown in Fig. 4. This may reduce immersion, but immersion is not a priority in office use cases.

TECHNICAL EVALUATION
We ran a brief technical evaluation to understand the basic performance of our prototype. We generated 25 goals with surface heights from 0.9 m to 1.4 m and angles from 0 to 180 degrees, randomly placed as targets in a 2.5 m × 3.5 m tracking area. Once a goal was specified in a simulator, the system immediately started moving UbiSurface to it. We measured four types of error (position, yaw rotation, height, and pitch rotation) over the twenty-five trials. The mean and standard deviation of each error are summarized in Table 1.
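For reference, the aggregation behind Table 1 can be expressed as follows. This is a sketch with a hypothetical data layout, where goals and reached are per-trial (x, y, height, yaw, pitch) poses; it is not the study's actual analysis script.

```python
import numpy as np

def summarize_errors(goals, reached):
    goals, reached = np.asarray(goals), np.asarray(reached)
    errors = {
        "position": np.linalg.norm(goals[:, :2] - reached[:, :2], axis=1),
        "height": np.abs(goals[:, 2] - reached[:, 2]),
        # Wrap angular differences into [-180, 180) before taking magnitudes.
        "yaw": np.abs((goals[:, 3] - reached[:, 3] + 180.0) % 360.0 - 180.0),
        "pitch": np.abs(goals[:, 4] - reached[:, 4]),
    }
    return {name: (e.mean(), e.std()) for name, e in errors.items()}
```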
Next, we examined the actual speed of each actuator in the current UbiSurface setup. We set a 3.5 m travel, a 0.9-1.4 m vertical elevation, and a 0-180 degree rotation as targets for the omnidirectional mobile robot, the vertical lift actuator, and the pitch angle adjuster, respectively. We measured the actual working speeds 20 times for each actuator. Table 2 summarizes the results, which demonstrate that the UbiSurface prototype with our workflow can mostly exploit the actuators' original capabilities.
Furthermore, we measured how much stiffness UbiSurface supports. We pressed the center and four corners of the surface in four pitch-angle conditions (fully vertical, tilted 30 degrees, tilted 60 degrees, and horizontal) with a force gauge. Once the device tilted, slipped, or its joint parts bent, we stopped pressing and recorded the force at that moment. Table 3 summarizes the results. Because these stiffness values are strongly affected by the total weight of the base unit and the stiffness of the 3D-printed angle adjuster, we consider the current setup sufficient for supporting basic mid-air interactions; it can be further customized with more weight or stronger hinges, especially for the vertical and tilted conditions.

USER STUDY

Overview
A user study was conducted to investigate the effect of UbiSurface's physical support on the user's arm fatigue and input accuracy during mid-air interactions in a room-scale VR world. As representative planar-input scenarios, we designed two tasks: painting and writing. In both tasks, participants interacted on a 2D virtual surface using a virtual writing tool operated via the VIVE handheld controller. We validated UbiSurface's performance by comparing it with a conventional mid-air method without physical support (Fig. 5). The study design below was officially approved by our university's Ethics Committee, including considerations for safety and prevention of COVID-19 infection.
We considered alternative baselines from the wearable, haptic-device, and handheld-device approaches, but could not find a suitable competitor. The most important condition for a meaningful direct comparison is a canvas-sized planar surface. There is no suitable wearable force device that we can reproduce on our end. The available haptic devices (e.g., USB touch) have input areas far too small for comparison. We investigated whether a hand-held canvas-sized surface would be comparable; however, such a surface is heavy and hard to balance from a single holding point, causing considerable fatigue in the non-dominant hand. Therefore, we decided to directly compare UbiSurface with conventional mid-air input rather than making incomplete or unfair comparisons. This simplification of the study design also reflects our ethical considerations, since the study necessarily induces perceptible arm fatigue in our participants.
Based on prior findings, we formulated the following hypotheses.

H1. The UbiSurface condition is significantly less fatiguing during interaction than the mid-air condition.
H2. The UbiSurface condition is significantly more accurate than the mid-air condition.

Participants
We recruited 12 participants (aged 20-24 years; 4 female, 8 male) from our university, all of whom had prior experience with VR headsets.

Apparatus
An HTC VIVE system was used for VR world rendering and spatial motion tracking. Fig. 6 shows the physical tracking area of around 2.5 m × 3.5 m, the HMD, and the UbiSurface device. We rendered a same-sized VR world using the Unity engine. Although UbiSurface is equipped with a multi-touch surface, we used the VIVE controller as the input device in both conditions for consistency. This simplified the focus of the study: examining the effect of the physical surface support offered by UbiSurface (Fig. 5).
Fig. 6. Experimental space and apparatus.

Task and Design
We designed the painting and writing tasks to examine prolonged mid-air planar interactions.

Painting
Task. The painting task was to fill in shapes displayed on a virtual flat canvas; it was designed to simulate VR users' arm fatigue during repetitive mid-air rubbing motions. Initially, a virtual canvas and a controller were displayed. Once participants approached the canvas and clicked a button with the tip of the controller, the outer frame of a primitive shape (e.g., a pentagon) appeared on it. They were required to fill it in using a virtual brush as quickly as possible, as shown in Fig. 7 (right). To induce consistent arm fatigue, we asked them to perform the task with only the dominant hand. When they finished filling in the shape and clicked the button near the canvas, the trial was formally completed. Ten seconds later, another canvas appeared at a different location, and they repeated the task with a different shape. Four canvases were predefined around the user at different positions and angles, and only one of them was displayed at a time. As shown in Fig. 7 (left), the height and angle of the four canvases were 1 m and 90 degrees, 1.1 m and 60 degrees, 1.2 m and 30 degrees, and 1.3 m and fully vertical. Their locations were selected to reflect UbiSurface's movement ranges and to simulate a variety of mid-air interaction clusterings [9]. The 1.3-m vertical surface was roughly at eye level and likely caused the heaviest upper-arm fatigue for our participant group (average height: 165.8 cm, range 152-178 cm); the other conditions were expected to induce milder arm fatigue. This design carefully induced apparent arm fatigue within the acceptable range defined in our approved ethics application. To maintain engagement, the shape and brush color were changed every trial. The painting area (input area) was identical across all shapes. The independent variable was the input condition: UbiSurface or mid-air input. Each participant performed 3 repetitions for each canvas, resulting in 12 trials per input condition. In the UbiSurface condition, the physical surface was repositioned to the next canvas during the 10-second break between trials. In the mid-air condition, the arm and controller were operated in the air. In both conditions, participants could rest during the break.
The dependent variables were the Borg CR10 physical exertion scale [10] (a widely used metric of perceived arm exertion), NASA-TLX (a subjective workload metric), and subjective feedback on preference and achievement on 7-point Likert scales. The Borg CR10 scale has been actively used for evaluating mid-air interaction workload (e.g., [15]).

Writing
Task. The writing task was designed to simulate how precisely a user can write characters or shapes in a VR space. As shown in Fig. 8, two guide lines were initially presented in the VR space (see also Fig. 9), and participants were required to write a line between them as accurately as possible without touching the guide lines. This protocol is based on common penmanship practice and is traditionally well known as a steering task in the HCI domain [4]. Participants performed the writing task repeatedly on horizontal and vertical virtual whiteboards, a typical example of a task causing severe Gorilla Arm issues [9]. The next pair of guide lines appeared once a trial was completed, and a new trial then began. All trials were designed to have the same input difficulty, with the same pen tip width (0.9 mm (3 px)), tracing length (176 mm (600 px)), and corridor width between the two guides (6 mm (20 px)).
The independent variable was identical to the painting task: two input conditions, UbiSurface and mid-air input. Participants repeated the writing task 36 times per input condition; the first 18 trials were performed on the vertical virtual whiteboard and the other 18 on the horizontal one, for 72 trials in total. The virtual whiteboards were physically rendered in the UbiSurface condition. In a pilot study, we observed slight robot positioning errors, attributable to the VIVE tracker's sensing errors, that nevertheless affected writing quality. To compensate, the system automatically adjusted the virtual whiteboard position to match the measured position of the physical touch surface whenever significant errors occurred. Our dependent variables were identical to the painting task to capture arm fatigue; we additionally measured the number of collisions (counted whenever the user's written line overlapped the guide lines) as a clear objective metric of writing accuracy.
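The collision metric can be sketched as follows under our assumptions: the pen trace is sampled as signed offsets (in mm) from the corridor centerline, and one excursion past a guide counts as one collision rather than one per sample. This is an illustration, not the study's actual logging code.

```python
import numpy as np

def count_collisions(offsets_mm, corridor_width_mm=6.0, pen_width_mm=0.9):
    """Count excursions of the pen trace into the guide lines."""
    half = (corridor_width_mm - pen_width_mm) / 2.0  # reachable half-width
    outside = np.abs(np.asarray(offsets_mm)) > half
    # Count rising edges: transitions from inside to outside the corridor.
    rising = np.count_nonzero(outside[1:] & ~outside[:-1])
    return int(rising + (1 if outside.size and outside[0] else 0))
```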

Procedure
Participants first signed a consent form and received an overview of the experiment. They then had a practice session before starting the main trials. The order of the tasks was fixed (painting, then writing), reflecting the difficulty of each task. The order of input methods (UbiSurface or mid-air) was counterbalanced: half of the participants started with UbiSurface followed by mid-air input, while the remaining half performed the tasks in the opposite order. A five-minute break was given between tasks. After each task, participants responded to a questionnaire, and an additional interview was conducted after all tasks. The total duration of the study was about two hours per participant. Participants received payment of about 30 USD in accordance with the university's regulations.

Results
Fig. 10 and Fig. 11 summarize the results of each task. A * in these graphs marks a significant difference detected by a Wilcoxon signed-rank test applied to the collected ordinal data.

Painting task. CR10: Fig. 10 (a) shows the CR10 scores in the painting task. The average fatigue of the dominant hand with UbiSurface was 25% lower than in the mid-air condition. This difference had a relatively large effect size (i.e., exceeding 0.5 [49]), but was not significant (p = 0.076, r = 0.513).
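For reference, the significance test and the effect size r = |Z|/sqrt(N) used throughout this section can be computed as in the following sketch (the scores shown are hypothetical, not our data):

```python
import numpy as np
from scipy import stats

ubisurface = np.array([2, 3, 2, 4, 3, 2, 3, 2, 3, 2, 4, 3])  # hypothetical CR10
midair     = np.array([4, 5, 3, 6, 4, 3, 5, 4, 4, 3, 6, 5])

res = stats.wilcoxon(ubisurface, midair)  # paired, two-sided by default
z = stats.norm.isf(res.pvalue / 2.0)      # |Z| recovered from the two-sided p-value
r = z / np.sqrt(len(ubisurface))          # r > 0.5 is a large effect [49]
print(f"p = {res.pvalue:.3f}, r = {r:.3f}")
```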
Preference and Subjective Achievement: Fig. 10 (b) shows the preference and subjective achievement ("how well was the painting performed?") scores in the painting task. The average preference score for UbiSurface was significantly higher, by 38%, than in the mid-air condition (p < 0.01, r = 0.744). The average subjective achievement score for UbiSurface was 13% higher than with the mid-air method, which was not significant (p = 0.063, r = 0.537).
NASA-TLX: Fig. 10 (c) shows the results for all NASA-TLX items. We did not find any significant difference between the UbiSurface and mid-air input conditions, although UbiSurface scored somewhat lower overall.
Therefore, our results did not support H1 in the painting task. We did not examine the metric corresponding to H2 in this task.

Writing task. CR10: Fig. 11 (a) shows the CR10 scores in the writing task, demonstrating that the physical load on the user's dominant hand in the UbiSurface condition was significantly lower than in the mid-air input condition (p < 0.01, r = 0.813).
Preference and Subjective Achievement: Fig. 11 (b) shows the preference and subjective achievement scores. The average preference score for UbiSurface was 195% higher than in the mid-air input condition (p < 0.01, r = 0.883), and UbiSurface yielded 117% higher subjective achievement than the mid-air input condition (p < 0.01, r = 0.883).
NASA-TLX: Fig. 11 (c) shows the results for all NASA-TLX items. For all scores other than temporal demand, UbiSurface significantly outperformed the mid-air condition (p < 0.01 and r > 0.7 for all), suggesting that our robotic physical surface substantially assisted the VR users' mid-air interactions.
Input accuracy: Fig. 12 (left) shows the mean number of collisions across all writing task trials. The number of collisions with UbiSurface was significantly lower than in the mid-air input condition (p < 0.01, r = 0.883). Fig. 12 (right) illustrates representative examples of participants' writing quality in both conditions, which also supports this result.
Our results clearly supported H1 in the writing task based on the CR10 and NASA-TLX scores. Furthermore, the input accuracy results clearly suggest that H2 is supported.

User study reflections
The system worked well throughout the study. We never observed collisions between UbiSurface's moving parts and our participants.
The experimental results show that the preference score for UbiSurface was significantly higher than for mid-air input in the painting task. According to our post-trial interviews, participants preferred UbiSurface because the presence of the physical surface increased the realism of the painting activity and allowed them to easily hold the brush's position and posture perpendicular to the canvas. However, contrary to our expectations, we did not find a significant reduction in arm fatigue with UbiSurface in the CR10 scores. Some participants felt that UbiSurface was easier because the surface provided a correct reaction force and fine-tuning of the pen's depth was not required. Others, however, felt that mid-air input was easy enough overall because they did not need to precisely place the brush tip on the canvas surface: painting succeeded even if the tip slightly penetrated the surface (the virtual tip is 5 mm long). This was an intentional setting to compensate for VIVE controller tracking error. As a result, they could still paint with rougher hand movements.
In the writing task, UbiSurface was more effective, successfully reducing arm fatigue while increasing overall performance. The post-trial interviews revealed that the hand position was easier to fix and was more stably supported by the physical surface. Participants also commented that with UbiSurface it was easy to keep the pen correctly tilted (approximately 45 to 90 degrees) against the canvas in both the vertical and horizontal whiteboard conditions. In contrast, the mid-air input condition did not help them maintain a correct pen posture in the air, increasing both physical arm fatigue and mental stress. We acknowledge that the writing task was performed after the painting task, so accumulated fatigue may have carried over into the writing task. However, the difference between UbiSurface and mid-air input was sufficiently large that we can still suggest UbiSurface is effective for operations requiring high-precision input and for maintaining a correct pen posture relative to the input surface during handwriting tasks.
The reduced arm fatigue and higher input accuracy resulted from the presence of the physical surface; thus, we would expect similar results if physical props could be prepared and positioned for every trial. In other words, UbiSurface's motion capabilities deliver this primary benefit of passive haptics to any virtual canvas without prior prop preparation.

Application examples and further customization
Extrapolating directly from our experimental results, art studios, classrooms, and design studios are concrete applications in which users can sketch and paint freely around the studio. Office workspaces can also be rendered to an extent. For example, the horizontal touch surface can physically render a virtual software keyboard or a digital table. If UbiSurface's multi-touch surface is activated, typing becomes the most practical scenario; one caveat for text typing is that more detailed finger-position visualization or real-time pass-through is still needed to let users know which fingers are above each key. Another office use case is book reading, where touch actions can be used for flipping pages and making annotations. Furthermore, UbiSurface would be useful for operating the control panels of virtual factories or laboratories (Fig. 13), where many controls such as buttons and slide bars are placed throughout the room to adjust the parameters of control and measurement units. Visualized data can generally be manipulated with navigation techniques. Specifically for life-size data visualization (e.g., airflow around a car or airborne virus spread), the scale information relative to the user's body, position, and room size should be maintained to understand the data scale and context correctly. In such cases, UbiSurface can help data analysts or presenters leave handwritten annotations in the spatial data within a life-size 3D scatter plot.
Another promising application of UbiSurface is as a test-bed for adaptive user interface design for XR users. In recent years, researchers have proposed toolkits that automatically design optimally personalized VR workspaces for individual users [19,20]. For these toolkits, UbiSurface can serve as a physical tester to investigate how suitable a well-designed ergonomic workspace is in dynamic contexts.

Safety
We never observed any physical contact with users while the robot was moving; however, we saw a few cases in which participants accidentally kicked the stopped mobile robot. This happened because users often misestimated the distance between their own body and the UbiSurface body from the spatial cues given in the VR view. One solution is improving the in-VR device visualization.
We could extend the visualization by displaying the entire robot body and rendering the user's estimated foot positions. Such additional visual highlights in VR might sacrifice some of the user's immersion, but that is a necessary cost for maintaining proper spatial awareness and safe operation.

UbiSurface operation and deployment
We recommend using UbiSurface as a future accessory for room-scale VR with off-the-shelf VR headsets and controllers. The locomotion capability is useful beyond mid-air interactions. Users need to invoke UbiSurface only when they think it is required, and it can then be quickly and flexibly positioned at the virtual surface currently in use. When VR applications are not running, UbiSurface's enclosure, while unique, offers conventional furniture functionality, such as a movable ergonomic table or a flexible monitor stand. As discussed above, robots are increasingly deployed in homes and offices, so our robotic-device approach to supporting VR interactions could adapt well to near-future infrastructure. UbiSurface uses a transparent panel as the input surface, which is well suited to AR or XR headsets and their applications because it does not occlude the in-VR or in-AR content [34].

LIMITATION AND FUTURE WORK
Robot positioning errors significantly affect users' precise content manipulation. In cases where the physical surface is placed slightly above the virtual surface, users might not be able to touch the virtual surface. The biggest bottleneck is the accuracy of the motion-tracking system: the consumer-level VIVE tracking system has centimeter-level errors, depending on conditions. We suggest implementing automatic VR content adjustment so that the virtual surface is always touchable, or using a professional motion-tracking system with millimeter-level accuracy to minimize positioning errors.
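One such adjustment is sketched below under our assumptions: the virtual canvas is snapped onto the measured physical plane whenever the tracked gap falls within the expected centimeter-level tracking error. The function and parameter names are hypothetical.

```python
def snap_virtual_height(virtual_h, tracked_physical_h, max_offset=0.03):
    """Return the height (m) at which to render the virtual canvas."""
    if abs(virtual_h - tracked_physical_h) <= max_offset:
        return tracked_physical_h  # always touchable: align with the real surface
    return virtual_h               # gap too large: likely a tracking glitch, keep as-is
```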
Our findings were straightforward and largely confirmed our hypotheses. However, the current study could not simulate the full user experience of UbiSurface, including waiting time and additional operation costs. For example, we moved UbiSurface during break times in the study, but such operation might not be viable in more dynamic scenarios. We acknowledge that our findings are limited to the fundamental effects of passive haptics with UbiSurface in a simplified room-scale VR context. Future work should test the total user experience, including more technical and practical aspects such as timing, user acceptance, and methods to call the robot.
The current surface size is practical but still not optimal. For use by a VR designer, the current canvas-sized input surface might be too small; it should arguably match the size of a typical drafting table. One possible solution is using multiple UbiSurface units operating under a swarm robotics algorithm [52,57,71], where two units can be connected or separated to render virtual surfaces of different sizes. Another solution for rendering a larger canvas is to slightly shift the physical surface so that it physically supports mainly the part being written on, which might work when writing phrases in sequential order. To make the surface more functional, it would also be beneficial to activate UbiSurface's multi-touch function to support regular stylus input for thin-line drawing. Nevertheless, we note that this requires additional highly accurate finger or stylus tracking and in-VR visualization.

CONCLUSION
We proposed UbiSurface, a robotic touch surface that can automatically reposition itself to physically represent a virtual planar input surface (VR whiteboard, VR canvas, etc.) and support users in providing accurate and fatigue-less input (handwriting, drawing, etc.) while walking around a virtual room. We designed and implemented a prototype that can dynamically change a canvas-sized touch surface's position, height, and pitch and yaw angles to adapt to virtual surfaces spatially arranged at various locations and angles. We also evaluated UbiSurface's technical performance and its effectiveness in mid-air painting and writing tasks. The results show that our system performed successfully, reducing arm fatigue and increasing input accuracy, especially in writing tasks. We discussed the results, alternative operations, and future deployment of robotic touch devices for room-scale VR systems.

Fig. 2. The left side shows the dimensions of the UbiSurface prototype. The right side shows an overview of the UbiSurface prototype.

Fig. 4. Visualization of UbiSurface in VR. A: Side view in real space. B: Side view in virtual space. C: View from the HMD user.

Fig. 7. Painting task: (left) Overall view of the virtual environment; three green transparent canvases are initially invisible. (right) A participant paints the virtual canvas using the controller.

Fig. 8. Writing task. Two guide lines are presented initially, and a participant writes a new line along the guides as accurately as possible.

Fig. 9. Writing task. Two guide lines are presented initially, and the participant writes a new line along the guides as accurately as possible.

Fig. 12. The left side shows the number of collisions in task 2 (**: p < 0.01); the bar charts show the mean with standard error. The right side shows examples of lines written by participants: the left examples were written with the mid-air method and the right examples with the UbiSurface method.

Fig. 13. UbiSurface helps with the operation of controls such as push buttons or slide bars in a virtual plant.

Table 2. Speed of each actuator.

Table 3. Technical evaluation of the prototype surface's supporting force.