How do Users Experience Asynchrony between Visual and Haptic Information?

In this paper, we investigate the effects of asynchrony between the visual and haptic feedback in virtual reality (VR) on user experience, specifically focusing on understanding users' awareness of this asynchrony and its effect on their level of satisfaction. Using Unreal Engine, we created an experimental setup to adjust the timing between these sensory inputs. Our experiment featured a VR dodge game that provides haptic feedback on the body when the player is hit by a multitude of virtual objects. Conducting a targeted, small-scale user study, we aim to understand in what ways an introduced asynchrony influences the VR experience. The results highlight the perceptibility of asynchrony, which significantly affects the overall user experience. Nonetheless, we also find an asymmetry that benefits scenarios where haptic feedback precedes visual cues. Furthermore, our findings suggest that users can generally accept minor levels of asynchrony without significant disadvantages to their satisfaction. However, it is interesting to note that even when users cannot explicitly identify any asynchrony, they might still experience a slight decrease in satisfaction.


INTRODUCTION
Many advanced human skills that are in high demand today, such as performing surgery, are beyond the reach of automation with today's technology. Although we can often engineer robotics solutions to execute the necessary physical actions for these tasks, replicating complex decision-making abilities akin to the human brain remains a tough challenge. However, replacing people with robots in an indiscriminate manner is neither feasible nor generally desirable. Instead, people's presence in remote environments can be achieved by telerobotics, which enables humans to perform physical actions remotely through networks. While the control of robots over a network has already seen plenty of use, including space exploration as far away as Mars, a critical element has often been missing in these applications, namely, the sense of touch.
With the advent of Extended Reality (XR), the remote control of sensing appendages has become both feasible and sought-after. From surgery robots to VR games, tactile feedback has become a key element in enhancing the control and immersion of the experience. This has sparked significant interest in this concept for the Internet, in what is now known as The Tactile Internet (TI) [13].
The challenge, however, extends beyond the hardware to include networking. According to the IEEE 1918.1 "Tactile Internet" Standards Working Group, tactile feedback requires very low latency, sometimes down to single-digit milliseconds [13]. The reason for such low-latency requirements can be seen in tasks involving a control loop, like the well-studied balancing of a stick on a finger [5]. Tactile feedback finds its relevance in various scenarios beyond control loops, with entertainment being a notable example. The desire for more immersion, such as experiencing a sense of actual presence in a game world, is a common aspiration. Tactile feedback achieves this by adding an additional sense to allow for a deeper connection to the activity.
While not all scenarios require low latency, the most demanding ones do, thus introducing challenges in implementation. Even setting aside the overhead that would occur in practice, these latencies approach the boundaries set by physical laws, with the speed of light as a notable constraint. For example, assuming a desired maximum round-trip latency of 10 milliseconds, the longest physically possible transmission distance would be around 1500 km, which imposes a severe limitation on the distance over which something can be controlled within the desired latency standards. Thus, alternative strategies are required for controlling operations beyond such distances.
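The 1500 km figure follows directly from the speed of light. A minimal sketch of the computation, assuming propagation at vacuum speed; the additional ~1/1.45 slowdown factor for optical fiber is an illustrative assumption, not a figure from this paper:

```python
# Back-of-the-envelope check of the distance bound implied by a latency
# budget: half of the round-trip time, times the propagation speed.

C_VACUUM_KM_S = 299_792.458  # speed of light in vacuum, km/s

def max_one_way_distance_km(round_trip_ms: float, medium_factor: float = 1.0) -> float:
    """Longest one-way distance reachable within a round-trip budget."""
    one_way_s = (round_trip_ms / 1000.0) / 2.0
    return C_VACUUM_KM_S * medium_factor * one_way_s

# Ideal vacuum bound for a 10 ms round trip: ~1499 km
print(round(max_one_way_distance_km(10.0)))
# Assuming optical fiber (refractive index ~1.45), the bound shrinks further
print(round(max_one_way_distance_km(10.0, 1 / 1.45)))
```

In practice, routing detours, queueing, and processing overhead shrink the reachable distance well below this ideal bound.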

Relevance of studying asynchrony
Multimodal feedback is highly desirable to achieve immersion for a physically active user in a remote environment. It is especially crucial that the user's visual and tactile senses form a coherent experience, meaning primarily that information from both senses should be received not only correctly but also synchronously.
Multimodal synchrony is a long-standing research topic that has addressed a vast number of scenarios with a wide variety of solutions [22]. Experiencing first-person activities in a remote environment through both visual and tactile senses is a fairly new field [14], especially in scenarios where both action and feedback should be intuitive and not require a learning phase for the user.
Due to the constraints of both remote robots and local haptic rendering devices, acting in a remote environment through robots introduces new types of delays that differ from visual delays. Motion and feedback prediction can help with such new delays. However, in this situation, misprediction is inevitable and reduces the coherence between visual and tactile information. Therefore, before applying prediction and overcoming its challenges, we believe it is critical to understand how much tolerance humans have for asynchrony between tactile and visual information: to ascertain the limits of asynchrony that users can accept and to assess whether a system with noticeable asynchrony can still be deemed acceptable. We explore this question in a VR game-like environment, namely "dodging", a game where players must move quickly to avoid or escape hits by moving objects.

Research Questions
Our paper investigates several research questions towards understanding the acceptable levels of asynchrony in VR games, particularly those involving dodging actions and responses. To that aim, we develop a VR game that employs haptic feedback, an interaction feature that blends tactile sensations with immersive gaming experiences. We consider the following main research questions:
• RQ1: What is the impact of asynchrony between visual and tactile information in VR environments?
• RQ2: How does the user experience tactile information rendered both behind and ahead of the visual information?
By answering those research questions, we aim to contribute to the broad research area of TI with a particular focus on prediction mechanisms to compensate for the effect of latency.
The designed game is notable for its novelty in evaluating the effects of asynchrony between haptic and visual cues, thus shedding light on how latency affects the user experience in such gaming scenarios. These efforts collectively aim to contribute to a better understanding of user experience in systems that make use of multimodal interaction. It is crucial to emphasize that the goal of this study is to provide a high-level overview of an initial set of trends observed in a user study conducted with a smaller group of participants, not to draw conclusions about features or characteristics that apply to a larger population.

RELATED WORK
There has been great interest in understanding how we use our senses to experience the world around us. Vision and touch, in particular, are the primary channels connecting us with our environment, leading to numerous studies investigating how these senses work together. For example, Kassuba investigated through functional MR imaging whether there is functional asymmetry in visual-haptic interactions in the context of object recognition [17], showing that vision holds a greater share than touch. It has also been shown that the availability of vision enhances tactile sensitivity [7]. Vellingiri and Balakrishnan [28] studied the subjective discrepancy between spatial cues in visual and haptic modalities, not necessarily arising from asynchrony. More recently, through the establishment of TI [13], how such multimodal feedback is perceived in virtual environments [10] as well as in remote environments through robots [19] has become a topic of study. The human hand is frequently the center of attention for this kind of feedback; the TI surveys by Van Den Berg et al. [27] and Promwongsa et al. [24] are examples of this focus.
The majority of works that investigate the delivery of tactile information by stimulating other parts of the body primarily deliver information that has no natural interpretation but must be consciously interpreted by the user. Examples of this are belts [15,23], a wearable robot arm [1], and full-body suits [20]. Tactile vests like the one used in this paper were also used in this manner by Jones et al. [16], while Elor et al. [11] explore the delivery of emotions through touch. Our work focuses on touch feedback through a vest that provides a tactile experience coherent with a concurrent audiovisual experience. The effect of such tactile presence on immersion has been studied by Carroll and Yildirim [6] and Cui and Mousas [8]. However, these studies did not consider the effects of latency.
Researchers who investigated the effects of latency in the action-feedback loop have frequently used the Phantom controller [2,18,26,29] and explored how latency is experienced in a variety of use cases, including networked environments. Here, users interact by holding a pen-like controller with two or three fingers and experience force feedback when they move it. The shape implies that all interactions involving this controller require a learning step.
A notable method used in reducing the effects of latency is the implementation of predictive algorithms. Boabang et al. [4] implemented predictive modeling of haptic feedback based on surgeons who performed needle insertions. Their assumption was that most feedback events would arrive on time and that the prediction would only have to hide a limited number of missing or late packets. Boabang et al. [3] retain the assumption that only a few packets are late or lost, but expand their work to the scenario of tying knots in remote surgery by comparing offline and online prediction schemes, meaning that the prediction model is either pre-trained or trained as data from the procedure is recorded. Mondal et al. [21] proposed an event-based forecast module consisting of a neural network and a reinforcement learning unit for haptic feedback. This module predicts when a user touches a virtual object with a certain texture and gives different haptic feedback based on the texture. None of these papers measured performance in terms of user experience but solely recorded objective measurements.
Our study is inspired by the limitations of TI but aims at the subjective experience of haptic feedback.
Unlike the ideas of feedback forecasting in an action-reaction loop, we explore the challenge of tolerable visual-haptic asynchrony, where the haptic feedback is not limited to the hands but is also delivered through vibrotactile feedback on the chest and back. The exploration of asynchrony is an important step towards understanding the synchronicity requirements for predicted visual-haptic feedback. To explore these questions, we create scenarios where the haptic information precedes or follows the visual information.

THE GAME & ENVIRONMENT
In this section, we first introduce our setup. We then provide the details with respect to the game design and the implementation of the different components.

Setup
To explore how humans are affected by asynchrony between visual and haptic information delivered both through the hands and the body, we design a game around a head-mounted display (HMD) with associated hand controllers and a haptic vest.
Our system consists of the bHaptics Tactsuit, which delivers basic vibrotactile feedback through 40 miniature motors integrated into the suit's fabric. As our HMD, we utilize the Oculus Quest VR. We design the VR environment for gameplay and testing using Unreal Engine 5.

Design
To explore participants' tolerance to varying asynchrony levels between visual and haptic feedback, we design a first-person dodging game where the user is immersed in the virtual world through both visual and haptic information. The snapshots from the game are illustrated in Figure 1. As the essence of dodging games lies in avoiding collision with obstacles [25], the timing between an action and its response is known to be crucial, and therefore, such a game provides a suitable testbed for our evaluations.
Just like any other dodging game, players stand in an alley and experience obstacles (in our case, virtual balls) flying in various patterns toward them. To enable the players to react naturally in the virtual environment, we assume that they are standing up and can move their body in all directions (3DOF) to avoid the virtual balls. Their ultimate goal is to avoid being hit by those balls on any part of their body, either by hitting the balls with a pair of paddles held in their hands or by dodging them. Players are considered 'hit' if a projectile strikes their virtual avatar, a scenario that they experience through both visual and haptic feedback. Regarding haptic feedback, if players cannot dodge or smash the balls with the paddles, the balls will strike their upper body. A haptic vest then provides haptic signals to the upper body at the point of impact. In addition, haptic feedback is also activated in the controllers when players successfully block a projectile using the flat paddles attached to the controllers in the VR space, as illustrated in Figure 1. In this way, haptic feedback plays an integral role in the game, indicating whether the players were hit or had successfully blocked a projectile. This creates a perceptual link between the visual movement of the projectiles and the sensed vibrotactile feedback. Although not essential for the gameplay (players can rely solely on visual cues), the absence of haptic feedback would reduce the gaming experience significantly.
To explore the subjective perception of asynchrony between visual and tactile feedback, we limit the scenario to virtual objects that move towards the avatar of a first-person player. The path of this motion is always linear, and the player should easily be able to predict any form of tactile interaction from the visual information. We then introduce asynchrony by rendering tactile signals before or after the objects have interacted with the avatar according to the visual information. In other words, we introduce asynchrony between the visual and haptic information by modifying the sending time of the haptic feedback signal on the software side. Here, we consider both positive and negative latency. Positive latency indicates that the haptic information is delayed. For example, to assess an asynchrony of 100ms of positive latency between the visual and haptic information, we send the haptic signal 100ms later than it was supposed to be sent at the time of impact. Negative latency indicates that the haptic information arrives before the visual information; here, we utilize a simple prediction algorithm for the projectile's trajectory. For example, to assess an asynchrony of 100ms of negative latency, we send the predicted haptic information 100ms earlier than it was actually supposed to be sent at the time of impact. For our analysis, the asynchrony range for the tactile information can be up to 100ms earlier or up to 200ms later with respect to the visual information.

Implementation
The gameplay involves projectiles spawning and moving towards the player, following a random selection of pre-determined patterns. Such patterns are manually created using a simple level editor that we developed. Patterns can vary in length, with a "level" comprising a set of such patterns. We distinguish patterns for the "easy" and "hard" conditions: the "easy" conditions are implemented as sparse streams of balls that arrive at the player in small groups, allowing them to hit or dodge all of them, while the "hard" conditions are implemented as a sequence of dense sets of balls reaching the player at the same time, which makes it impossible for a player to dodge all of them.
During gameplay, a pattern is randomly chosen from the appropriate set, and projectiles are spawned accordingly. Once a pattern is completed, a new one is selected from the set. This semi-random spawning approach aims to improve player engagement, as predetermined patterns allow for creating more intriguing challenges compared to pure randomness. Ultimately, the main gaming objective is to either dodge or block these incoming projectiles.
We introduce an adjustable delay to the tactile signal based on a variable to mimic real-world latency. This means that the tactile feedback occurs with a delay relative to the visual indication of being hit. We implement this using a simple queue system where tactile events, rather than occurring immediately, are inserted into the queue to be activated later.
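The queue-based positive delay can be sketched as follows. This is a minimal illustration of the idea, not the authors' Unreal Engine implementation; the names (`HapticEvent`, `DelayQueue`, `chest_03`) are hypothetical:

```python
# Sketch of a delay queue for haptic events: each impact is timestamped
# and released only after a configurable positive delay has elapsed.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class HapticEvent:
    fire_at: float                               # game-clock time to actuate
    actuator: str = field(compare=False)         # e.g. a vest motor id
    intensity: float = field(compare=False, default=1.0)

class DelayQueue:
    def __init__(self, delay_s: float):
        self.delay_s = delay_s                   # e.g. 0.1 for +100 ms
        self._heap: list[HapticEvent] = []

    def on_impact(self, now: float, actuator: str, intensity: float = 1.0):
        # Instead of firing immediately, schedule the event for later.
        heapq.heappush(self._heap, HapticEvent(now + self.delay_s, actuator, intensity))

    def poll(self, now: float) -> list[HapticEvent]:
        """Called every frame; returns events whose delay has elapsed."""
        due = []
        while self._heap and self._heap[0].fire_at <= now:
            due.append(heapq.heappop(self._heap))
        return due
```

A game loop would call `poll` each frame and forward the returned events to the haptics SDK.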
We also explore the concept of "negative" delay, where tactile feedback precedes the corresponding visual event. This implementation is more complex, as it involves predicting future events. We apply a linear trajectory prediction to the projectiles moving in a constant, straight trajectory. We trigger the haptic feedback by using a ray-cast to determine if the player or paddle intersects the projectile's path. However, this method has several limitations, particularly when players move frequently, making it less reliable for accurately simulating latency.
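The linear prediction step can be illustrated with a ray-sphere intersection, assuming a constant-velocity projectile and a static player bounding sphere (a simplification; the paper uses a ray-cast against the avatar and paddles):

```python
# Predicted time of impact for a projectile on a straight, constant-velocity
# path: solve |p + v*t - c|^2 = r^2 for the smallest non-negative t.
import math

def time_to_impact(proj_pos, proj_vel, player_pos, player_radius):
    """Smallest t >= 0 at which the projectile enters the player's bounding
    sphere, or None if the straight-line path misses."""
    rel = tuple(p - c for p, c in zip(proj_pos, player_pos))
    a = sum(v * v for v in proj_vel)
    b = 2 * sum(r * v for r, v in zip(rel, proj_vel))
    c = sum(r * r for r in rel) - player_radius ** 2
    disc = b * b - 4 * a * c
    if a == 0 or disc < 0:
        return None                       # stationary projectile or a miss
    t = (-b - math.sqrt(disc)) / (2 * a)  # first entry into the sphere
    return t if t >= 0 else None

def haptic_fire_time(now, tti, lead_s=0.1):
    """Fire haptics `lead_s` seconds before the predicted visual impact."""
    return None if tti is None else now + tti - lead_s
```

If the player moves after the haptics fire, the predicted impact may never occur visually, which is exactly the reliability limitation noted above.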

PERFORMANCE EVALUATION
We divide the performance evaluation section into six parts. First, we present the ethical considerations in conducting the study. Then, we provide a summary of the most important statistical and technical aspects of the user study. Next, we leverage the collected data to measure whether there is a statistically significant difference between the responses of the participants across the scenarios under test. Then, we present the results of the user study and provide a statistical summary of the participants' responses. After that, we use Simple Linear Regression (SLR) to predict the user experience across varying delay values. Finally, we discuss the results.

Table 1 (questions posed after each condition): Q1: "On a scale of 1 to 5, how good would you say your experience was?" Q2: "On a scale of 1 to 5, how strongly did you notice the asynchrony between the visuals and the haptics?" (5 - noticed a great amount) Q3: "On a scale of 1 to 5, did you feel the impact much earlier or later than expected (where 1 is much earlier, and 5 is much later)?"

Ethical considerations
The study was designed according to the code established by the National Ethical Committee for the Natural Science and Technology (NENT) [9]. The data collection was fully anonymous and therefore exempt from registration.
Besides age and gender, we did not record any other personal information that would permit the identification of participants, and they could decline to provide this information. We did not record any audiovisual or biometric information or any other objective data. The study used a questionnaire, and only verbal answers were recorded. All participants were adults who were informed about the goal and the scope of the study. The tests were designed so that participants were free to exit the VR world at any time, even though all of them completed the study successfully.

Conducting the user study
Ahead of the user study, we conducted a pre-study with our lab members. This pre-study was used to determine asynchrony levels that yield a noticeable difference. It also served to find a test duration that would allow participants to become immersed in the task. Participants in the pre-study did not take part in the study itself.
For the actual user study, we recruited a total of 20 participants (13 male and 7 female) with an average age of 24. They were recruited in a public space at the University of Oslo that is frequented by both students and staff. After being taught how to play the dodge game, each participant became accustomed to using the controls of our game. Upon wearing the VR equipment comfortably and entering the VR environment, participants underwent a brief calibration phase to normalize individual differences, such as height. The calibration phase was followed by entering a "tutorial room," particularly designed to familiarize the participant with the VR environment, the equipment, and the game's objectives. Such a preparatory phase was crucial before starting the actual tests.
After calibration, the participants experienced a reference condition without any asynchrony and were asked to report their experience. After this, they played the game with every asynchrony value at easy and hard difficulty (see below). The settings were randomly permuted for every participant.
Each participant was then asked to play a set of tests, each representing a unique combination of asynchrony level and pattern set. We utilized seven different asynchrony levels (i.e., −100ms, −50ms, 0ms, 50ms, 100ms, 150ms, and 200ms) and two distinct pattern sets (easy and hard). Our procedure required each participant to complete 14 tests together with two baseline tests at the start. The order of the tests was randomized, except that the baseline tests were always conducted first. The entire process averaged around 10 minutes, and the players stayed in the virtual world during this period.
Each condition lasted for 30 seconds and was immediately followed by a set of verbal questions posed by the test conductor. The questions were asked while the participant remained in the VR environment to maintain the lived experience. The questions Q1-3 are shown in Table 1.

Statistical Significance Tests
For the statistical significance analysis, we calculate the p-values by leveraging the non-parametric Friedman test [12]. Compared to ANOVA, Friedman's test is more robust to differences in individual ratings. Being based on ranks rather than specific values, it is better suited to data collected from untrained participants who may interpret the response scales from Table 1 differently.
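A within-subjects analysis of this shape could be run with SciPy's Friedman test, which takes one array per condition, each holding the participants' ratings in a consistent order. The ratings below are fabricated placeholders for illustration only:

```python
# Friedman test across repeated-measures conditions: each column holds the
# same 20 participants' Likert ratings under one asynchrony level.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
# rows: 20 participants; columns: 5 positive-latency conditions (0..200 ms)
ratings = rng.integers(1, 6, size=(20, 5))    # placeholder 1-5 ratings

stat, p_value = friedmanchisquare(*ratings.T)  # one argument per condition
print(f"Friedman chi-square = {stat:.2f}, p = {p_value:.3f}")
```

Because the test ranks each participant's ratings internally, a participant who uses only the top of the scale and one who uses only the bottom contribute comparably, which is the robustness property noted above.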
We explore this separately for the positive (Table 2) and negative (Table 3) latency data. We make the following key observations. First, Table 2 shows that there is a statistically significant difference for Q2 and Q3 (easy difficulty) and Q2 (hard difficulty) at a p-value of < 0.001, while for the rest of the questions, the p-values are < 0.12. On the other hand, Table 3 shows similar results for the easy difficulty tests (i.e., p-values < 0.12). However, no statistically significant difference is observed for the hard difficulty tests.

Statistical Analysis
Figure 2 shows the mean and standard deviation of the participants' satisfaction score (Q1). The two categories are shown side-by-side, with the results for the "easy" scenario depicted in blue and for the "hard" scenario in red. The significance is indicated as ++ above 99%, as + above 90%, and as − if no significance could be found. As expected, we observe that the experience of the participants is at its peak when asynchrony is zero and degrades when haptic information is either early (negative values) or late (positive values) with respect to the visual information.
From a player's point of view, and due to the preference of the visual sense in interacting with virtual worlds, we can describe negative asynchrony also as the situation where a player feels the hit of a ball on their body before they expect the ball to reach them based on the visual information.
Figure 2 illustrates that the participants' satisfaction decays at approximately the same rate for positive and negative asynchrony in the "easy" scenarios. The satisfaction generally decays more slowly in the "hard" scenarios, and there is a marked asymmetry showing less decay of satisfaction when the haptic information arrives earlier than the visual information than when it arrives late.
In general, most participants expressed that they considered their experience to be quite good, even under the extremes of asynchrony. We find a statistically significant reduction in subjective experience as the delay increased, but on average, not enough to lower most participants' experience significantly below good, even with a 200 ms delay. There was also a slight reduction for negative delays, but it was reported to affect their experience less.
Figure 3 shows the mean and standard deviation of the participants' ability to notice the asynchrony. The majority of the participants did not report any (or reported a reduced) asynchrony when the actual delay value was set to 0. As soon as the asynchrony was increased (or decreased), respectively, they began to report some issues. For early haptic rendering (negative asynchrony) and late haptic rendering (positive asynchrony) of 50 ms, most participants expressed that they did not notice any asynchrony, but more of them noticed it for values exceeding this threshold. Most (but not all) participants were able to identify a 200 ms delay quite clearly. Between the two difficulty scenarios, the results do not vary significantly. An interesting result to report, however, is that a limited number of participants had very little sensitivity to asynchrony, answering that they felt no noticeable delay even at 200 ms. In particular, a single participant responded that they noticed no asynchrony at a 200 ms delay, and 5 participants responded that they noticed no or only a little asynchrony at 200 ms.
The question of whether users perceived the impact of balls as coming early or late (Q3) is shown in Figure 4.The results are less consistent than for the other questions.For both negative and small positive delays, participants struggled with accurately pinpointing whether the delay came early or late, with the negative delays bringing the least certainty.However, for the large positive delays, participants were generally able to tell that impacts were felt late (although not always).Only a single participant perceived impacts that were rendered with 200 ms delay as coming a little early.

Simple Linear Regression
We adopt an SLR fit for predicting a participant's response as the delay value increases or decreases, respectively. Figure 5 shows the regression models for all three questions and both easy and hard scenarios. We consider the positive and negative delay values separately because of their distinct implementations described in Section 3.3.
In addition, Table 4 provides additional insights on the regression results shown in Figure 5.In particular, we report the intercept and slope values, respectively, for each of the regression lines (including all difficulty scenarios and both positive and negative delays).
Considering that the data for Q1 and Q2 are statistically significant (Section 5.1), we performed a linear regression analysis to extract the relation between the asynchrony level and participants' responses. In particular, we exclude the reference condition and average over the hard and easy difficulty scenarios, as these are categorical values and thus cannot be given an ordered numerical position on a number line.
We further performed the SLR on negative and positive delay data separately for the effect of asynchrony on the responses. We make this distinction because negative and positive delay values use two different delay mechanisms and because there is no obvious ordering between negative and positive delays. The SLR fit shows that there is a statistical correlation between higher asynchrony and a lessened user experience. However, in this specific instance, the effect is perhaps smaller than one would expect: even a massive 200ms delay results in only a slight reduction in user experience. In addition, in this specific experiment, participants appeared to be more tolerant of negative delay than positive delay. While the inherent system delay may result in some delay that we did not account for, Figures 2 and 3 show weaker trends for negative than positive asynchrony.
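An SLR of this shape, fit separately per delay sign, can be sketched with `scipy.stats.linregress`. The ratings below are fabricated placeholders that only mirror the reported qualitative trend (a mild decline in Q1 as positive delay grows), not the study's data:

```python
# Simple linear regression of mean satisfaction (Q1) on positive delay,
# yielding the intercept and slope reported per regression line.
from scipy.stats import linregress

delays_ms = [50, 100, 150, 200]       # positive asynchrony levels
mean_q1   = [4.4, 4.2, 4.0, 3.7]      # illustrative mean satisfaction scores

fit = linregress(delays_ms, mean_q1)
print(f"intercept = {fit.intercept:.2f}, slope per ms = {fit.slope:.4f}")

# Extrapolated satisfaction at a given delay:
predicted_at_200 = fit.intercept + fit.slope * 200
print(round(predicted_at_200, 2))
```

The negative-delay data would get its own independent fit, since the two delay signs use different mechanisms and share no natural ordering.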
As for noticeability, we see that there is a fairly strong correlation between asynchrony and a participant's ability to notice it. Clearly, positive delays are easier to notice than negative delays. We also see a stronger correlation here than in the subjective experience, suggesting that participants may notice the delay while their overall experience is not affected by it. Finally, we also performed an SLR on whether the response to how strongly the participant noticed the asynchrony correlates with the response given for the participant's subjective experience. We observe a very strong correlation between how much a participant noticed the asynchrony and their experience, with a user being more likely to say that their experience was okay or bad when they also said that they strongly noticed the asynchrony.
It is worth mentioning that most participants only noticed the asynchrony to a moderate degree at its most noticeable, suggesting that those more sensitive to delays found that it affected their subjective experience more significantly.

Discussion
The analysis allows us to revisit our research questions. RQ1 was concerned with the impact of asynchrony between visual and tactile information in VR environments. We observe that there can be considerable variance between participants in terms of how well they tolerate asynchrony. Some participants were much more sensitive than others and were thus able to identify the delay nearly precisely. On the other hand, a significant fraction of participants were very insensitive, struggling to notice delays and reporting their experience to be equally great, even with a large amount of asynchrony. On average, there seems to be a mostly linear trend of the user experience getting worse with increased asynchrony in both directions. However, even in the worst-case scenario, i.e., 200ms, most participants considered the experience to still be acceptable. Even for small amounts of delay, i.e., 50ms, a large portion of participants (around 35%) never reported any noticeable amount of asynchrony at all, and those who did reported only a slight reduction in user experience. An asynchrony of 50ms does not impose strong limits on networking, thus allowing for interaction between many European regions.
Asynchronous visual and haptic information in the scenario we investigated does not seem highly critical. Although a 50ms asynchrony implies a noticeable latency, it was not noticeable for everybody, and many who did sense a delay were not sure about it. It was only when the delay reached 100ms that the majority of participants began to report a significant amount of delay.
RQ2 asked about the user experience when tactile information is rendered ahead of or behind the visual information. We found that participants are more tolerant of negative than positive asynchrony. However, it is difficult to draw definitive conclusions on this aspect, as the negative asynchrony was (necessarily) implemented through prediction, while positive asynchrony was implemented by inserting delay in the game. In addition, sending a haptic command to the TactSuit itself involves a slight delay, which shifts the results linearly towards the positive asynchrony case.

LIMITATIONS
This paper initiates a deeper look into how we perceive haptic feedback related to body movement.Unlike other research focusing on actions that trigger haptic feedback, our study examines actions that fail to avoid haptic feedback.As a preliminary investigation, it only covers a small part of the broader question.We acknowledge our study's limitations and outline potential solutions to overcome them in future research.
Figure 5 shows an asymmetry: user satisfaction peaks at zero latency, while user noticeability is lower when haptic feedback is predicted up to 50 ms early. The meaning of this skew is uncertain for two reasons: (a) we have so far not managed to measure the cumulative latency of the vest (vibrotactile actuators), its communication channel (Bluetooth), software latency, and the VR hand controllers and headset (USB); (b) we have used the headset position to estimate the vest (chest) position. We intend to measure the cumulative latency using a pair of acceleration sensors and to track the absolute vest position with an HTC Vive Tracker.
The next limitation is the number of participants in our user study (20). On the one hand, the diversity of user profiles was adequate to produce a rich dataset that offers variability and can be used to capture patterns and trends towards deriving interesting insights. On the other hand, the limited scale prevents us from confidently generalizing our findings.

CONCLUSION
In this paper, we conduct a user study to investigate the importance of asynchrony between visual and haptic information. By either introducing delay or adding prediction to the haptic information in a simple first-person dodging game, we study how different levels of asynchrony are perceived. We found that a difference of 50 ms in either direction creates statistically significant differences in participants' ability to notice the asynchrony as well as in their satisfaction. However, for the specific case of our dodging game, satisfaction rarely drops below an average value ("Okay" as reported in the paper) for asynchrony up to 100 ms.
We also found an asymmetry in our results for both satisfaction and noticeability: satisfaction decays more slowly and noticeability rises more slowly when haptic information is rendered before the visual information.
The study in this paper was motivated by the observation that there will be a discrepancy between visual and haptic feedback in an upcoming system for first-person remote interaction. If the different modalities are too disjointed, they must be integrated artificially, probably through prediction mechanisms. To understand when this error-prone step is actually necessary, we must first find the limits of acceptable asynchrony. This paper makes a step in this direction.
As for future work, we plan to expand our work in several directions. First, our main motivation for the current work was to show trends in terms of latency asynchrony, but we acknowledge that it is hard to determine exact latency points without understanding the latency inherent in the different components of the system. We therefore plan to carry out additional user studies with a more granular analysis of the −100 to +100 ms region, as well as an identification of the potential sources of latency in the system. Furthermore, we plan to expand our analysis to a larger, more diverse user group. Our overarching goal is to understand the perceptual thresholds that humans can tolerate in networked interaction. In such interactions, latency plays a significant role in determining responsiveness, and acknowledging and proactively managing latency through predictive measures and compensatory strategies is fundamental for the success of interactive applications, ensuring a more immersive and responsive user experience. In this paper, as a starting point, we considered only a virtual environment, where latency is easier to compensate for or predict. Our goal in the near future is to take this from the virtual world to the physical world, where we can enable various types of remote operations through robotics and haptics.
Due to our limited understanding of the connection between the moving objects in the virtual world and participants' perception of haptic signals, this study is also limited to two sets of patterns, "easy" and "hard". The results illustrated in Figures 4 and 5 differ for the two cases, but in extended studies we will include a wider variety of patterns and explore the continuum between the current "easy" and "hard" settings. Another ambition is to explore users' exerted force, for example, by using pressure sensors. The haptic feedback in our design was constrained to discrete cues, excluding interaction dynamics related to intensity or effort. Finally, we want to improve how we mimic negative asynchrony in software. The current approach is based on linear trajectories of the balls and ignores human motion. In the future, we aspire to also predict the player's motion for a more accurate prediction.
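The linear-trajectory prediction underlying our negative asynchrony can be sketched as follows: given a ball's position and constant velocity along the axis towards the player, compute the time to impact and fire the haptic command a chosen lead time early. This is a simplified, one-dimensional sketch under the stated assumptions (constant velocity, stationary player); the function names and parameters are illustrative, not the study's actual game code.

```python
def predict_impact_time_ms(ball_z_m: float, chest_z_m: float,
                           velocity_z_mps: float):
    """Predict when a ball on a linear trajectory reaches the chest plane.

    Assumes constant velocity along the z axis and a stationary player.
    Returns the time to impact in milliseconds, or None if the ball is
    stationary or moving away from the player.
    """
    distance_m = chest_z_m - ball_z_m
    if velocity_z_mps == 0 or distance_m / velocity_z_mps < 0:
        return None  # the ball never reaches the chest plane
    return 1000.0 * distance_m / velocity_z_mps


def should_fire_haptics(time_to_impact_ms: float, lead_ms: float,
                        frame_ms: float) -> bool:
    # Fire the haptic command once we are within one frame of the point
    # lead_ms before the predicted impact (negative asynchrony).
    return time_to_impact_ms - lead_ms <= frame_ms
```

For instance, a ball 5 m away approaching at 2 m/s yields a predicted impact in 2500 ms; with a 50 ms lead and a ~16.7 ms frame time, the haptic command fires once roughly 60 ms of flight time remain. Because the sketch ignores the player's own movement, the predicted impact point drifts when the player dodges, which is exactly the shortcoming we intend to address by also predicting the player's motion.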

Figure 1 :
Figure 1: Screenshot of the spectator view. The left subfigure shows the location of the haptic vest's vibration motors under the headset. The right subfigure shows the player's first-person view in VR with an approaching "wall" of balls.

Figure 2 :
Figure 2: Mean and standard deviation of the participants' satisfaction score (Q1) versus asynchrony, grouped per difficulty level (blue: easy, red: hard). Background points represent the actual answers by each participant.

Figure 3 :
Figure 3: Mean and standard deviation of the participants' noticeability scores (Q2) versus asynchrony, grouped per difficulty level (blue: easy, red: hard). Background points represent the actual answers by each participant.

Figure 4 :
Figure 4: Mean and standard deviation of the participants' asynchrony strength scores (Q3) versus delay value, grouped per difficulty level. Background points represent the actual answers by each participant.

Figure 5 :
Figure 5: Simple linear regression results. A figure (and a regression fit) is provided for each question, dissected per difficulty scenario. The confidence intervals for each fit are illustrated in grey.

Table 1 :
List of questions asked after every condition.

Table 2 :
Friedman test results on late haptic rendering data, grouped per difficulty level. Significance is indicated as ++ if above 99%, as + if above 90%, and as − if no significance could be found.

Table 3 :
Friedman test results on early haptic rendering (using prediction), grouped per difficulty level.

Table 4 :
Regression statistics. Intercept and slope values for each regression line illustrated in Figure 5.