WatchCap: Improving Scanning Efficiency in People with Low Vision through Compensatory Head Movement Stimulation

Individuals with low vision (LV) frequently face challenges in scanning performance, which in turn complicates daily activities requiring visual recognition. Although those with peripheral vision loss (PVL) can theoretically compensate for these scanning deficiencies through active head movements, few practical applications have sought to capitalize on this potential, especially during visual recognition tasks. In this paper, we present WatchCap, a novel device that leverages the hanger reflex phenomenon to naturally elicit head movements through stimulation feedback. Our user studies, conducted with both sighted individuals in a simulated environment and people with glaucoma-related PVL, demonstrated that WatchCap's scanning-contingent stimulation enhances visual exploration. This improvement was evidenced by fixation- and saccade-related features and by positive feedback from participants, and the stimulation did not cause discomfort to users. This study highlights the promise of facilitating head movements to aid people with LV in visual recognition tasks. Critically, since WatchCap functions independently of predefined or task-specific cues, it has a wide scope of applicability, even in ambient task situations. This independence allows WatchCap to complement and integrate with existing tools aimed at detailed visual information acquisition, facilitating a comprehensive approach to assisting individuals with LV.


INTRODUCTION
Scanning is an essential function for visual recognition in daily life, enabling us to collect information from our surrounding environment and understand the situational contexts. The information gathered through scanning plays a critical role in the perception of spatial orientation and the construction of global representations, thereby facilitating the execution of daily activities [42,47]. The ability to scan effectively relies on good peripheral vision and the management of gaze movements associated with it [9,14,52,56]. However, eye diseases such as age-related macular degeneration and glaucoma can cause damage to the visual field, negatively impacting the scanning abilities of people with LV. Those with LV experience a restricted field of view (FoV), which limits their interactions with visual stimuli, ultimately diminishing the efficiency of scanning [52,75]. Consequently, individuals with LV may encounter difficulties in collecting information and remaining aware of their surroundings, which can make daily activities more challenging [8,22,50,51,67,68,71].
To mitigate the challenges faced by LV individuals, most previous research has directly conveyed essential environmental cues, such as information required for wayfinding [31,46,58,78,80] or obstacle avoidance [7,32,33,63,79], to users through auditory [3,7,24,31,33,45,58], haptic [11,26,46,63,78], and multimodal [80,83] feedback. While cue-based assistance effectively delivers designated information to people with LV, its utility might be confined to the specific tasks for which it was designed. As feedback cues need to be meticulously designed for anticipated conditions and situations, the predetermined design may not be flexible enough to accommodate unforeseen situations or changes in real time. Additionally, a potential increase in cognitive demand needs careful consideration to ensure the overall effectiveness and usability of cue-based assistance tools.
Individuals with LV often have residual vision that, while limited, can still be harnessed effectively for daily tasks. Using residual vision has been reported to be a preferred method of visual recognition for people with LV [72,84]. Consequently, several approaches have been proposed to facilitate easier access to crucial information for daily tasks by maximizing the utilization of residual vision [29,60,81,83]. One prominent approach is the augmented reality (AR) cue technique using head-mounted displays (HMDs), which offers visual cue feedback to simplify the scanning process and make information easier to comprehend [51,76]. While this method has enabled LV individuals to enhance essential abilities for task completion relying solely on their residual vision, visual stimuli modified by visual cues may induce an unnatural experience and increase cognitive demands in return.
However, it is noteworthy that some LV individuals with PVL can efficiently perform tasks requiring visual recognition by actively leveraging compensatory behaviors. Individuals with compromised vision can use the scanning process to accomplish various tasks by incorporating head movement. This has been observed in dynamic scenes such as driving [6,30,39]. Patients who actively used head movement in situations demanding scanning abilities demonstrated improved performance. The gaze movements of LV individuals with PVL, affected by a limited FoV, undergo adaptive changes when accompanied by head movement, facilitating more effective scanning [13,15,30]. This adaptation influences gaze behavior and improves performance in tasks requiring scanning abilities. Encouraging LV individuals with PVL to incorporate head movements while scanning their environment can therefore potentially compensate for their diminished scanning capabilities. However, efforts to use the compensatory effects of head movements to enhance the scanning abilities of individuals with PVL have not been explored. Although this phenomenon has not yet been incorporated into aids for people with PVL, facilitating it without necessitating purposive, intentional head movement could support their scanning behavior in daily activities without adding to the cognitive load.

The Present Study
To enhance the scanning ability of people with LV while ensuring compatibility with existing methods, we developed and tested a system designed to naturally facilitate head movement during scanning. The contributions of this study are summarized as follows:
• We developed and validated a real-time scanning identification algorithm to provide gaze state-contingent feedback that supports scanning without interfering with verification.
• We utilized the illusory shear force and vibration-induced hanger reflex phenomenon to design a device, WatchCap, that facilitates wider gaze movement.
• We evaluated the impact of this unconscious scanning facilitation on visual exploration tasks through saccade-related metrics and user feedback, involving both simulated environments and actual low-vision participants.

RELATED WORK

2.1 Peripheral Vision Loss (PVL)
2.1.1 Peripheral Vision. Human peripheral vision, encompassing a visual angle of 2.5 to 70 degrees or farther, is responsible for collecting information outside the central field of view and recognizing situational contexts. This capability is essential for understanding spatial orientation and building global representations during scanning activities. Even though the density of optical nerves in peripheral vision is lower than that in central vision (visual angle: 0-2.5 deg), causing a reduction in visual perception capabilities, the broader visual field of peripheral vision makes it suitable for interacting with a wider area of visual stimuli and gathering information [13,42,47]. Furthermore, information collected through peripheral vision serves as a basis for planning the next gaze movement, leading to efficient scanning [52,57]. Thus, good peripheral vision is indispensable for scanning ability. However, people with PVL, often a symptom of eye diseases such as glaucoma and diabetic retinopathy, experience difficulties in recognizing surrounding conditions through scanning. This form of vision impairment critically influences individuals' daily lives, particularly in tasks where environments change dynamically, such as driving or walking in traffic [22,50]. Moreover, their challenges are not limited to dynamic situations; their performance in static tasks such as visual search and reading is also considerably lower [8,67,68]. This consistent struggle in daily tasks due to reduced peripheral vision and scanning ability highlights the urgent need for assistive solutions. Consequently, aids and training methods have been proposed to facilitate broader visual exploration and assist in the daily activities of individuals with PVL [28,60]. Enhancing the quality of life for individuals with PVL necessitates efforts to develop and implement strategies that effectively address the challenges they face.

Aids for Individuals with Low Vision.
People with LV often struggle to carry out daily activities, requiring a feedback system that can replace or supplement scanning. Several prior works have attempted to offset this reduced scanning ability by employing sensors to gather the information necessary for a particular task and relay it back to the user. These methods span a range of tasks, such as wayfinding [3,24,31,46,58,78,80,83] and obstacle avoidance [7,26,32,33,63,79]. They also utilize diverse feedback modalities, including audio [3,7,24,31,33,45,58], haptic [11,26,32,46,63,78,79], and multimodal [80,83] approaches. For example, Xu et al. [80] designed the virtual paving method to aid people with LV in navigating complex environments via sensors that detect obstacles and changes in terrain. The system provided audio and haptic feedback to allow users to understand and react quickly to the environment.
Recent research has highlighted the potential of leveraging LV individuals' residual vision to enhance their performance in daily activities, as individuals with LV show a pronounced preference for utilizing this technique for routine tasks [71,72,84]. Despite the inherent challenges in scanning ability among those with LV, visual cues have been introduced to facilitate easier access to critical information needed for tasks such as wayfinding [29,83], obstacle avoidance [51,60], and object finding [60,76,85]. Zhao et al. [83] investigated how to provide guidance for wayfinding by comparing visual and audio feedback types. While both feedback mechanisms resulted in similar task performance, the visual type of guidance produced fewer errors during task execution and demanded less cognitive effort. It also assisted users by allowing them to focus more intensely on tasks and ensuring the safety of their execution. Furthermore, LightGuide [81], which uses light perception for cues, outperformed haptic feedback guidance in path-following tasks, resulting in faster, more efficient, and more accurate assistance. Participants also reported that the visual type of feedback was more intuitive and user-friendly.

Gaze Behavior of People with Sighted Vision.
Human gaze movement provides insight into interest, concentration, and task performance. A nuanced analysis requires the segmentation of gaze movement into two primary actions: fixation and saccade. Fixation is defined as the behavior where the gaze remains in a specific area. By assessing measures such as duration and dispersion of fixation, one can identify the object of focus and the level of concentration. Saccade, on the other hand, is characterized by rapid movement of the gaze from one fixation to another. The duration of fixation and amplitude of saccades allow us to discern between the scanning or verification phases of gaze movement. Furthermore, continuous straight relative-direction saccades, sometimes termed forward or strong momentum saccades, also denote gaze movement shifting to new areas, signifying a scanning phase [69,70].
By analyzing both fixations and saccades, one can determine whether an individual is scanning their surroundings or collecting concentrated information through verification [48,74]. A gaze movement that displays a short fixation duration and long saccade amplitude is indicative of a scanning phase [13]. If this scanning-phase gaze behavior also exhibits a shorter fixation duration combined with increased saccade amplitude and momentum, it may be inferred that scanning is being performed efficiently [13,15]. Therefore, the metrics of fixation duration, saccade amplitude, and saccade momentum can serve two purposes: 1) identifying the scanning phase and 2) evaluating the efficacy of both the scanning and verification processes.

Gaze Behavior of People with PVL.
People with PVL display unique gaze behaviors primarily influenced by their limited FoV. In the case of saccade behavior in people with LV, a constricted FoV restricts the saccade range, often resulting in shorter amplitudes [5,43,53]. Additionally, due to the relative scarcity of information collected from visual stimuli, their saccades tend to progress in a forward direction with strong momentum [5]. For the fixation behavior of those with LV, shorter fixation durations are observed for the same reason [12]. These observations suggest that interventions aimed at increasing saccade amplitude might be essential for enhancing scanning ability. Thus, the specific challenges faced by people with PVL necessitate targeted strategies that cater to their unique gaze behavior patterns.

Compensatory Behavior
2.3.1 Role of Head Movement in Gaze Behavior. The introduction of embedded eye-tracking capabilities into HMDs has enabled the stable and robust measurement of natural gaze data without strict constraints on head or torso movement. Gaze experiments with this newfound flexibility have revealed that head movement plays a unique role in human gaze movement [13,15,18,27]. Primarily, when head movement accompanies gaze saccades, the amplitude of saccades is approximately twice as long as in scenarios with head restrictions, indicating a synergistic effect [15]. This is not merely a result of the combined impact of head and eye movement; even excluding the increase in amplitude contributed by the added head rotation, the head movement accompanying the saccade enabled the gaze to land in new visual stimulus areas that were previously unobservable. Additionally, the momentum of gaze saccades becomes stronger when paired with head movement [30]. Head rotation, although short in amplitude, possesses strong momentum, continuously moving in one direction and influencing gaze behavior. Considering that head movement increases both the amplitude and momentum of gaze saccades, which are key indicators of scanning performance, head movement emerges as a crucial factor in optimizing scanning ability, especially for recognizing peripheral environments.

Head Movement for Individuals with PVL.
For individuals with PVL, the FoV of their residual vision not only is an indicator of their visual ability level but also acts as a criterion for assessing their ability to participate in activities that require varied vision proficiencies. Social barriers often emerge for such individuals, ranging from restrictions on obtaining a driving license due to challenges in responding to dynamic environmental changes to prospective employment hurdles due to misperceptions about their productivity [17,23,30]. Interestingly, a segment of individuals with PVL has demonstrated the ability to safely perform driving tasks [6,30,39]. This is attributable to their proactive use of head movements as a compensatory measure for their disadvantaged gaze behavior during such activities. The beneficial impacts of head rotation on gaze behavior, such as amplifying gaze amplitude or enhancing gaze momentum, have been evident in these individuals [13,30]. Therefore, by actively employing head movements, they can modify their gaze movement and potentially enhance their performance in tasks that rely on visual processing. Based on these anticipated benefits, systematic efforts to leverage head movements as a strategy for enhancing scanning ability in people with PVL hold promise.

Rotational Force Perception.
To compensate for gaze movement during scanning, two distinct approaches can be used to guide a user's head rotation according to the proper timing and direction. The first is based on electrical muscle stimulation, which directly induces head actuation [73]. This method is suited for manipulating the final head orientation angle since it relies on muscle contraction, rather than influencing the rotation process itself.
The second approach revolves around the hanger reflex. The hanger reflex is a phenomenon in which a rotational force acts upon the head when pressure is applied to the mastoid process [66]. Although the exact mechanism is not fully understood, it is believed that the reflex arises due to a shearing force on the skin [4]. This reflex can be triggered by directional skin stretching and vibration. Numerous tests on body parts, such as the wrist [4,55], legs [65], waist [34], and head [34,66], have revealed that this skin stimulation produces an illusory force, prompting the rotation of the specific body part. One notable advantage of the hanger reflex is its unconscious integration. It can rapidly generate a significant illusory force that prompts rotation with a simple implementation. Additionally, using double-hanger reflex devices [54], one can modulate the direction of head rotation by activating or deactivating the vibration mechanism. Building on these considerations, our research employed the double-hanger reflex method, providing stimulation feedback to encourage natural head rotation during the scanning phase of LV individuals' visual perception processes.
Furthermore, the application of the hanger reflex is not limited to merely stimulating body part rotation; it can also be used to naturally modify a series of movements, such as walking [20,35]. Fukui et al. [20] demonstrated that inducing the hanger reflex on a user's knee can bias their walking direction. This could naturally alter the user's walking path. Based on these findings, we hypothesized that utilizing the hanger reflex could guide head movements, thereby naturally altering the gaze movements of individuals with LV.

IMPLEMENTATION
Our system implementation is twofold: 1) the design of a real-time scanning identification algorithm for the scanning phase, which is characterized by continuous forward saccades before the onset of the fixation for verification, and 2) the integration of this identification into a stimulation device that facilitates head movement during scanning to compensate for limits in gaze movement. Specifically, the system operates in a sequence of steps: (1) The orientations of the eye (eye-in-head) and the head (head-in-world) are collected in real time to compute the direction of gaze (eye-in-world). (2) A fixation identification algorithm is used to discern whether the current gaze state is a fixation or a saccade. (3) Finally, a scanning identification algorithm is utilized to determine the optimal timing to convey stimulation feedback to the user. This determination is based on parameters of both fixations and saccades.

A major focus was on the early identification of scanning behavior, since stimulation feedback delivered at improper timing could obstruct the scanning and verification processes and deteriorate the user experience. For instance, inducing head rotation during the verification phase might interrupt a user's engagement in detailed observation of visual stimuli.

Additionally, to deliver head movement stimulation together with this software that detects the start of the user's scanning behavior, we designed a wearable stimulation device named WatchCap. Unlike a conventional cap, WatchCap harnesses the double-hanger reflex effect. Head rotation is induced by applying balanced shear forces in both directions and intensifying the hanger reflex effect with vibration during the scanning phase.

Eye Tracking Data Preparation
We used the Meta Quest Pro [16] to track eye (eye-in-head) and head (head-in-world) movements in quaternion form. Quaternion multiplication between the two orientations provided gaze orientation data (eye-in-world). Multiplying the orientation data of the eye, gaze, and head with the forward vector generated their respective directions. These directions represent positions projected onto a unit sphere from the user's perspective. Alongside their respective timestamps, the eye, gaze, and head directions were constantly recorded at a refresh rate of 60 Hz.
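For illustration, the composition described above can be sketched as follows. This is a minimal example that assumes Hamilton quaternions in (w, x, y, z) order and a +z forward vector; the actual engine conventions (handedness, forward axis, quaternion order) may differ.

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate_forward(q, forward=np.array([0.0, 0.0, 1.0])):
    """Rotate the forward vector by unit quaternion q, yielding a direction on the unit sphere."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return 2*np.dot(u, forward)*u + (w*w - np.dot(u, u))*forward + 2*w*np.cross(u, forward)

# Gaze (eye-in-world) is the head-in-world orientation composed with the eye-in-head orientation.
# gaze_q   = quat_mul(head_q, eye_q)
# gaze_dir = rotate_forward(gaze_q)   # eye_dir and head_dir are obtained analogously
```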
To process the collected gaze direction data, we employed a fixation identification algorithm. This process determined whether the current gaze movement state was either a fixation or a saccade. To meet the demands of a real-time VR environment, we used the velocity and dispersion threshold identification (I-VDT) algorithm [44]. This algorithm classifies the gaze state based on velocity, dispersion, and duration thresholds. The classified states were then recorded alongside the direction data.
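The velocity step of this classification can be sketched as follows. This is a simplified illustration only: the threshold value is illustrative, and the full I-VDT algorithm additionally applies its dispersion and duration checks to the slow samples.

```python
import numpy as np

def angular_velocity_deg_s(prev_dir, cur_dir, dt_s):
    """Angle between consecutive unit gaze directions (deg) divided by the sample interval (s)."""
    cos = np.clip(np.dot(prev_dir, cur_dir), -1.0, 1.0)
    return np.degrees(np.arccos(cos)) / dt_s

def velocity_step(prev_dir, cur_dir, dt_s, vel_thresh=70.0):
    """Fast samples are labeled saccades; slow samples become fixation candidates,
    which I-VDT then groups using its dispersion and duration thresholds."""
    if angular_velocity_deg_s(prev_dir, cur_dir, dt_s) > vel_thresh:
        return "saccade"
    return "fixation_candidate"
```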
We further computed metrics for gaze behavior from the recorded timestamps and directions of fixations and saccades. The fixation duration was determined by comparing the starting and ending timestamps of a fixation. The saccade amplitude was calculated by computing the difference in vectors between the gaze and head directions at the onset and termination of a saccade. The magnitudes of these values were then recorded as the gaze saccade amplitude and head saccade amplitude. To discern the relative direction of saccades, or saccade return angle (SRA), we calculated the dot product between the vectors of the completed saccade and the preceding saccade. As an indicator of saccade momentum, we employed the saccade reversal rate (SRR) [5]. The SRR denotes the fraction of backward saccades, falling within a range of 170 to 190 degrees, observed during a trial. The momentum of saccades was then evaluated using both the SRA and SRR metrics. High SRA and SRR values indicate weaker saccade momentum, whereas low values denote stronger saccade momentum. While the fixation duration, saccade amplitude, and SRA were available in real time, the SRR could only be computed post-trial.

A real-time identification algorithm that distinguishes the current user phase as scanning is essential for the effective delivery of stimulation feedback during a user's scanning behavior. Existing methods identify the scanning phase only post hoc, after the complete collection of gaze data [25,37]. This retrospective approach does not align with our study's real-time requirements. Hence, our proposed methods determine the beginning of scanning behavior by recognizing a few preliminary saccades based on heuristic gaze movements observed during the scanning phase.

Scanning Identification
Initially, we designed a scanning identification algorithm utilizing the saccade amplitude as a threshold. As highlighted in previous research analyzing gaze data from visual search tasks [13], the scanning phase consistently demonstrated a significantly extended saccade amplitude compared to the verification phase. This observation holds true across various conditions of visual field loss and irrespective of the data's reference type (e.g., eye, head, or gaze). Hence, our algorithm identifies the scanning phase whenever the saccade rotation (amplitude) exceeds a specific amplitude threshold (e.g., Amp_eye > Amp_thresh or Amp_gaze > Amp_thresh). This method is referred to as the Rot algorithm (refer to Figure 2 (B), (C) and Algorithm 1). We further subdivided the chosen direction data into Rot_Eye and Rot_Gaze. We chose not to utilize head direction data, as the slow pace and short amplitude of head movement present difficulties in precise identification.
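A minimal sketch of the Rot decision as described above follows; the function and parameter names are illustrative, the numeric threshold is not specified in the text and would need tuning, and the doubled gaze threshold reflects the adjustment described for the gaze-referenced data in the next paragraph.

```python
def rot_scanning_check(saccade_amplitude_deg, amp_thresh_deg, use_gaze=False):
    """Rot algorithm sketch: flag the scanning phase when the just-completed saccade
    amplitude (eye- or gaze-referenced) exceeds the amplitude threshold. The gaze-based
    threshold is doubled because concurrent head rotation roughly doubles gaze amplitude."""
    threshold = amp_thresh_deg * (2.0 if use_gaze else 1.0)
    return saccade_amplitude_deg > threshold
```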
Subsequently, we designed an algorithm focusing on the direction of the post-saccade fixation for scanning identification. This approach determines the phase as scanning if the gaze direction results in a fixation that extends beyond the central visual field and its adjacent areas. Typically, verification relies predominantly on central vision, positioning the gaze direction centrally within the visual field. However, after verification, the scanning phase often diverges from this central region toward the periphery. Thus, we developed another scanning identification algorithm, termed the Area algorithm, that discerns these situations. An area radiating from the visual field's center up to a visual angle threshold defines the central region (refer to Figure 2 (D), (E), and Algorithm 2). The scanning phase is identified when the saccade concludes outside this demarcated area (e.g., Angle_eye > Area_thresh or Angle_gaze > Area_thresh). Similarly, based on the selected direction data, this was categorized further into Area_Eye and Area_Gaze. However, because concurrent head rotation causes the saccade amplitude (gaze) to be roughly double that of the saccade amplitude (eye) [15], we adjusted both the amplitude and the visual angle thresholds to twofold for the gaze direction data.
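For completeness, a corresponding sketch of the Area decision; approximating the visual field center by the current head direction is an assumption made for this illustration, and the threshold value is again left unspecified.

```python
import numpy as np

def area_scanning_check(fixation_dir, center_dir, angle_thresh_deg, use_gaze=False):
    """Area algorithm sketch: flag the scanning phase when the fixation that ends a
    saccade lands outside the central region, defined as a cone around the visual
    field center (approximated here by the head direction)."""
    cos = np.clip(np.dot(fixation_dir, center_dir), -1.0, 1.0)
    eccentricity_deg = np.degrees(np.arccos(cos))
    threshold = angle_thresh_deg * (2.0 if use_gaze else 1.0)
    return eccentricity_deg > threshold
```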
However, when utilizing eye (eye-in-head) data, incorrect identification due to the vestibulo-ocular reflex (VOR) may occur. VOR is a reflexive movement that stabilizes gaze (eye-in-world), allowing the gaze direction to remain stable even during head movement, as the eye direction moves in the opposite direction. Therefore, when using eye direction data, the user's gaze may remain fixated while the eye direction changes due to VOR. If such movements were recognized as part of the scanning phase, the user might want to keep their gaze fixed while the stimulation forces the gaze to move. However, eye movements caused by VOR are not recognized as saccades because they have a speed similar to that of head movements [36,49,59] and thus do not exceed the velocity threshold of the I-VDT [44]. Additionally, scanning identification only recognizes movements as part of the scanning phase if a saccade occurs and surpasses a certain threshold. Therefore, we expected that misidentification due to VOR eye movements would not occur, and consequently we used eye direction data.

Double-Hanger Reflex
To prompt immediate head rotation in alignment with the start of a user's scan, we devised a device that leverages the hanger reflex effect for stimulation feedback. The hanger reflex is a phenomenon wherein rotational force is exerted on the head due to the shear force produced by skin stretching and vibration.
To induce bidirectional rotation force perception, Nakamura et al. [54] utilized the characteristic of the hanger reflex wherein the perception of rotational force intensifies upon the application of vibration. This method has the advantage of simplicity in the device's form, as it does not use air pumps or tubes as in previous implementations of the hanger reflex using air balloons [20,66]. Additionally, it affords the benefit of inducing immediate responses by activating the vibration actuator.
WatchCap utilizes this implementation form [54,64] to induce head rotation in either a clockwise or counterclockwise direction by activating the vibration actuator. Initially, we placed four double-hanger reflex vibration modules (see Figure 3 (C)) at each pole of WatchCap (see Figure 3 (B)-1). These modules are constructed by attaching a rigid coin-shaped metal piece to the vibration actuator. When a user wears WatchCap, the modules press closely against the head, applying pressure to the contacted areas. Pulling the strap at the back of WatchCap (see Figure 3 (B)-2) moves the modules backward and causes skin stretching, resulting in a balanced state of rotational force perception in different directions. This replaces the role of the air balloons used in previous studies to generate pressure and skin stretch [20,66]. When inducing the illusory rotational force, we disrupt the equilibrium between the two opposing illusory forces by activating two vibration actuators in opposite directions at 50 Hz, which has been reported as the most effective frequency for enhancing the hanger reflex [54].
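The trigger logic can be sketched as follows. This is a hypothetical sketch: the message format of the Arduino/Wi-Fi link is not specified here, so send_command() is a placeholder, and choosing the rotation direction from the horizontal direction of the triggering saccade, as well as the stimulation duration, are assumptions made purely for illustration.

```python
import time

def send_command(direction, freq_hz):
    """Placeholder for the Wi-Fi message to the cap's Arduino (actual protocol not specified)."""
    ...

def trigger_stimulation(saccade_dx, duration_s=0.5):
    """When the scanning phase is identified, break the equilibrium of the double-hanger
    reflex by driving the actuators for one rotation direction at 50 Hz, the frequency
    reported as most effective for enhancing the hanger reflex."""
    direction = "clockwise" if saccade_dx > 0 else "counterclockwise"
    send_command(direction, freq_hz=50)
    time.sleep(duration_s)          # illustrative stimulation duration
    send_command("stop", freq_hz=0)
```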
Activation of the WatchCap modules is achieved via an Arduino module equipped with Wi-Fi communication capabilities, which is attached to the top of the cap. The operation of the vibration actuator involves a total maximum delay of 13 ms. This accounts for the average computation time of the scanning identification algorithm (0.3 ms), the Wi-Fi communication delay (0.5 ms), and the activation time of the vibration part (12 ms). The power required to operate the Arduino module and the vibration actuator is supplied by an external battery connected to WatchCap. Additionally, we considered the external appearance of the device to ensure that individuals with LV would not hesitate to use it in their daily lives. The modules were attached inside a cap commonly used in everyday life, with only the minimal necessary components exposed externally.

In this study, we aimed to enhance the scanning ability of individuals with PVL by utilizing stimulation feedback that induces head rotation. However, it had not yet been confirmed whether WatchCap affects users as intended (i.e., altering participants' gaze movement to be wider). Therefore, before conducting a user study with our target population, individuals with PVL, we decided to investigate the impact of WatchCap's stimulation on individuals with sighted vision. To facilitate this, we employed an LV simulation that replicates the visual field of individuals with PVL for the sighted participants.

LV Simulation
LV simulations aim to provide a comprehensive understanding of the experiences and challenges faced by individuals with LV during daily activities, allowing the general population to empathize. Several studies have employed LV simulations to gain insights into the experience of individuals with LV [10,38,82] or to evaluate the systems they have proposed [1,2,29]. Zhang et al. [82] introduced a tool that simulates LV visual field loss in both central and peripheral visual conditions and across mild to severe disorders. This allows researchers to assess the visual recognition experiences of people with visual field loss. Further research using transient peripheral masking also provides insights into the gaze behavior of participants experiencing PVL [13,15]. Therefore, this study employs an LV simulation using transient masking as a method to assess the visual processing experience under visual field loss, enabling us to gauge the suitability of the stimulation feedback.
We simulated the visual conditions of people with PVL using a VR HMD in this research. Visual stimuli that mask the peripheral vision were used to ensure that participants could not utilize their peripheral vision. Depending on the simulated severity of PVL, the masking size varied (see Figure 4): the mild symptom simulation provided a visual field of 15 degrees (loss outside of the macular region), and the severe symptom simulation was restricted to 7.5 degrees (loss outside of paracentral vision), horizontally, thereby restricting the observation of the surrounding environment. Furthermore, we dynamically adjusted the peripheral vision masking based on the participant's gaze direction to mirror as closely as possible the actual visual experience of individuals with LV.
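Conceptually, this gaze-contingent masking amounts to a per-pixel test of angular distance from the current gaze direction. The sketch below is illustrative only: it assumes the stated 15-degree and 7.5-degree fields are full (diameter) angles, and an actual implementation would run per pixel in a shader rather than in plain Python.

```python
import numpy as np

def is_masked(pixel_dir, gaze_dir, severity="mild"):
    """Return True if a pixel (given as a unit direction from the viewer) falls outside
    the preserved central field around the current gaze direction."""
    preserved_full_deg = {"mild": 15.0, "severe": 7.5}[severity]
    cos = np.clip(np.dot(pixel_dir, gaze_dir), -1.0, 1.0)
    eccentricity_deg = np.degrees(np.arccos(cos))
    return eccentricity_deg > preserved_full_deg / 2.0
```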

STUDY DESIGN
In this study, we aimed to investigate three primary objectives:
(1) Evaluate the performance of our identification algorithm in distinguishing the scanning phases.
(2) Observe whether the gaze behavior of participants with sighted vision actually changed as intended through the stimulation feedback provided by WatchCap.
(3) Observe the changes in gaze behavior and visual processing experience when WatchCap was used by individuals with PVL.
To achieve the first two objectives, we conducted Study 1 with participants with sighted vision under simulated conditions of PVL. Study 1 served as a foundational step in observing the effects of WatchCap on gaze movement changes of participants with sighted vision during the scanning phase. Subsequently, Study 2 involved participants with PVL, allowing us to verify the practical use of our system, as validated in Study 1, on the visual processing experience of individuals with PVL.
This research is structured around specific research questions that guide our user study, data collection, and analysis. The research questions (RQs) and the corresponding research hypotheses (RHs) are detailed in the subsequent sections of this paper. Our research aims to contribute to the development of more effective aids for LV individuals, enhancing their ability to perform daily tasks that require scanning ability. Visual processing comprises two distinct phases, scanning and verification, each characterized by unique gaze behaviors. As Velichkovsky et al. [77] noted, the scanning phase involves short fixation durations and extended saccade amplitudes. This pattern facilitates a broader coverage of visual stimuli, aiding in the rapid perception of scenes or the development of global representations. In contrast, the verification phase typically involves longer fixations and shorter saccades, focusing on detailed information collection within specific areas of interest.

Research Questions
WatchCap aims to facilitate the user's construction of a global scene representation by inducing head movements, thereby potentially enhancing the scanning efficiency for individuals with PVL. However, if stimulation occurs during the verification phase, a period characterized by detailed examination, it could inadvertently interfere with visual processing. Thus, developing an accurate scanning identification algorithm is vital for preventing the misidentification of the verification phase as scanning.
This study introduces a real-time algorithm capable of determining the user's visual processing state (RH 1). We developed four algorithm variants, each leveraging eye-tracking data indicative of the scanning phase. The discerning performance of these algorithms is measured using a confusion matrix based on the coefficient K, which serves as a key indicator for distinguishing between the scanning and verification phases. Rapid and extensive gaze movements during the scanning phase are essential for the quick construction of scene representations. These movements, particularly forward-moving saccades with strong momentum, are notably more efficient than revisiting previously observed visual stimuli. This efficiency is characterized by indicators such as short fixation duration, large saccade amplitude, and strong saccade momentum, which collectively suggest a user's effective perception of a broader scene. Central to our study's objective, such changes in gaze behavior are essential for enhancing the scanning abilities of individuals with PVL.
In this study, by comparing gaze behaviors before and after receiving feedback, we aimed to assess any expansion in participants' gaze movements (RH 2). Feedback leading to shorter fixation durations (RH 2.1) and an increase in the amplitude and momentum of saccades (RH 2.2 and RH 2.3) would suggest its effectiveness in enabling users to acquire a more comprehensive view of the scene during the scanning phase.
Gaze behaviors are influenced by the severity of visual field loss; more severe conditions limit visual stimuli, leading to briefer fixation durations and shorter saccade amplitudes. This narrow view reduces scanning efficiency. Nonetheless, visual condition and head movement are separate domains. If head movement proportions differed according to visual conditions, the effects of feedback on head movements would also differ. However, David et al. [15] reported no relationship between the extent of PVL and head saccade amplitude or momentum. Hence, we hypothesized that WatchCap's feedback, which induces head movement, would uniformly change the gaze of individuals with PVL, independent of the visual condition (RH 2.4).

In addressing the RQs previously outlined, we aim to evaluate the algorithms and WatchCap within simulated environments. We extended this evaluation to incorporate individuals with PVL with two primary objectives: gaining insights into individual experiences and investigating the utilization of WatchCap based on each individual's scanning strategy. It is crucial to acknowledge that simulations of PVL may not completely replicate the visual processing experiences of individuals who actually live with this condition. Therefore, we conducted an additional study involving participants with PVL. In this investigation, we not only examined changes in gaze behavior (RH 3.1) but also gathered individual qualitative experiences related to visual processing while using WatchCap (RH 3.2).
People with PVL often develop distinct visual processing strategies tailored to their specific visual conditions. Therefore, the impact of WatchCap on visual processing might vary. For instance, WatchCap may have a relatively modest effect on individuals who already incorporate active head movement in visual processing. Conversely, individuals who predominantly rely on eye movement might derive greater benefits from our system. Therefore, we aimed to assess the impact of WatchCap on visual processing, taking into account each individual's unique scanning strategy (RH 3.3).

Participants. The study participants were recruited through the LV community and local recruitment websites, specifically targeting individuals who could provide documented evidence of a diagnosis of conditions leading to visual field deficits, such as glaucoma. All participants in the study had received a clinical diagnosis of glaucoma from an ophthalmologist and were experiencing symptoms of varying severity. For their participation in the experiment, each participant received compensation at a rate of 30 USD per hour.

Experimental Setup.
To collect gaze data and the experience of using WatchCap, a visual recognition task was implemented in a VR environment for the user study. The visual recognition task was designed to allow participants to naturally observe art images without any specific goal, preventing bias in their gaze data. This was to avoid diluting or missing the effects of WatchCap's stimulation, because gaze data become biased by a specific task goal [25,37] (i.e., a phenomenon where gaze distribution is concentrated on a target when asked to find a specific object). A total of 30 art images were used in the task, and only images without text-dense elements were selected, some of which can be seen in Figures 1, 3, and 4. Participants were seated in a chair and asked to use only their head and eyes to view the virtual art images, which had a resolution of 3840 x 1080, simulating the appearance of a 3.5 m x 1 m object positioned 1 m away, for 60,000 ms.
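For context, a back-of-the-envelope calculation (assuming the image is centered straight ahead of the seated viewer): the simulated 3.5 m x 1 m surface at a distance of 1 m subtends roughly 2 · arctan(1.75 m / 1 m) ≈ 120.5 degrees horizontally and 2 · arctan(0.5 m / 1 m) ≈ 53.1 degrees vertically, so the stimulus extends beyond the headset's field of view and cannot be inspected without eye and head movements.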
The VR environment was displayed using the Meta Quest Pro [16]. It features eye tracking, a 106-degree FoV, and a resolution of 1800 x 1920 per eye. Utilizing this device enabled the collection and real-time analysis of gaze data and the application of gaze-contingent effects in the implementation of the LV simulation.

4.2.3 Procedure. This section describes the experimental protocols for Studies 1 and 2. The experimental protocols of the two studies were identical, with the exception of the simulated low-vision environment employed in Study 1, which involved participants with sighted vision. The protocol and the data obtained for both studies, including the questionnaires and eye-tracking measures, were approved by the institutional review board (IRB assurance number: 20220628-HR-67-20-04, assigned on July 21, 2022).
Preliminary information and consent. Upon arrival, participants were informed about the potential dizziness and discomfort from the VR environment and WatchCap's stimulation. After the experiment was explained (i.e., the details of the visual recognition task), the participants provided their consent for the data collection. Subsequently, participants' personal data, such as age, sex, vision specificity, and the presence of any visual disorders, were gathered. The experiment began only once it was confirmed that no vision or visual disorders would impede the process.
Preparation for the visual recognition task. To collect high-quality gaze data, we used the gaze calibration protocol of the VR HMD. After the calibration, the participants removed the VR HMD and put on WatchCap. The experimenter adjusted WatchCap to ensure that the vibration was felt at the designated position and confirmed the participants' correct perception of the direction of the illusory force caused by the hanger reflex. After ensuring the device's optimal positioning and the participants' correct perception, they wore the VR HMD again to begin the visual recognition task. Participants underwent three trials for each of the 10 conditions (in Study 1) or 5 conditions (in Study 2), following a balanced Latin-square study design (i.e., the 5 conditions of Study 2 comprised four stimulation conditions utilizing the Rot_Eye, Rot_Gaze, Area_Eye, and Area_Gaze algorithms, in addition to one non-stimulation condition), and observed a different art image on each trial.
Feedback and comfort assessment. After every trial, participants were interviewed to gauge their comfort level on a scale of 1-10 [19]. The experiment was paused temporarily if participants reported fatigue or discomfort that interfered with their ability to continue. The participants were also informed that they could pause or stop the experiment at any point if they experienced severe discomfort. For participants in Study 2, in-depth interviews were conducted following the completion of all trials.

Data Processing.
To validate the hypotheses, a process was undertaken to filter only the scanning phase data from the collected gaze data (see Figure 5). For the gaze data in the stimulation condition, the data were filtered from timestamps marked as the scanning phase by the scanning identification algorithm in real time. In contrast, the non-stimulation condition did not include scanning phase markings. To address this, we employed the same four scanning identification algorithms to extract the scanning phase from the non-stimulation data, thereby employing it as the control condition for each of the stimulation conditions.

Gaze Behavior Metric Extraction.
Seven metrics were used for gaze behavior analysis: 1) fixation duration, 2) gaze saccade amplitude, 3) head saccade amplitude, 4) SRR for gaze, 5) SRR for head, 6) SRA for gaze, and 7) SRA for head. To compute these metrics, three consecutive fixations and the two intervening saccades were grouped. Specifically, the duration of the second fixation determined the fixation duration, while the amplitude of the saccade between the second and third fixations defined the saccade amplitude. The relative direction of the two saccades, between the first and second and then the second and third fixations, yielded the saccade momentum.
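As an illustration of this grouping, the sketch below computes the per-triple metrics from a list of fixations; the data structure and field names are assumptions made for the example, and the amplitude is taken as the magnitude of the direction-difference vector, as described earlier.

```python
import numpy as np

def triple_metrics(fixations):
    """For every three consecutive fixations (with the two intervening saccades), return
    the middle fixation's duration, the amplitude of the second saccade, and the saccade
    return angle (SRA) between the two saccade vectors. `fixations` is assumed to be a
    list of dicts with 'start'/'end' timestamps (ms) and a unit direction 'dir'."""
    rows = []
    for f1, f2, f3 in zip(fixations, fixations[1:], fixations[2:]):
        duration_ms = f2["end"] - f2["start"]
        sac1 = f2["dir"] - f1["dir"]                  # saccade vector, fixation 1 -> 2
        sac2 = f3["dir"] - f2["dir"]                  # saccade vector, fixation 2 -> 3
        amplitude = np.linalg.norm(sac2)              # magnitude of the direction difference
        cos = np.dot(sac1, sac2) / (np.linalg.norm(sac1) * np.linalg.norm(sac2))
        sra_deg = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
        rows.append((duration_ms, amplitude, sra_deg))
    return rows

def saccade_reversal_rate(sra_values):
    """SRR: fraction of backward saccades; with an unsigned return angle (0-180 deg),
    the 170-190 degree band described in the text reduces to SRA >= 170 deg."""
    return float(np.mean(np.asarray(sra_values) >= 170.0))
```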

Understanding the Gaze Behavior with Different Scanning Identification Algorithms (RH 1).
This study's first hypothesis was tested by analyzing the performance of the proposed scanning identification algorithm in detecting the scanning phase, with the coefficient K serving as the ground truth.
Krejtz et al. [37] introduced the coefficient K as an effective measure for discerning the states of visual processing. The coefficient K is derived from the difference in the z-scores of fixation duration and saccade amplitude. A negative value of K suggests a scanning (or ambient) phase, while a positive value points to a verification (or focal) phase. This coefficient is beneficial for tracking transitions in visual processing over time. However, its real-time application is limited due to the prerequisite of knowing the user's distribution characteristics for fixation duration and saccade amplitude.
To calculate the coefficient K, we followed the methodology set forth by Krejtz et al. [37] and Guo et al. [25]. Gaze data from trials without stimulation feedback were utilized to avoid any influence from stimulation. For a balanced analysis across trials and participants, we calculated the mean and standard deviation of fixation duration and saccade amplitude over all non-stimulation condition trials. The duration of each fixation and the amplitude of the ensuing saccade within every trial (60,000 ms each) were then converted to z-scores for the K value calculations. These values were binned in 5,000 ms intervals to classify each bin's visual processing state, using the average K value, as either the scanning or the verification phase. The total number of binned data points was 846, of which 419 were in the verification phase (about 49.5%) and 427 were in the scanning phase (about 50.4%). The chosen binning duration was evaluated for normality using the Shapiro-Wilk test. In cases of normality test violations, we assessed the kurtosis and skewness of the K distribution to confirm adherence to a normal distribution.
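A minimal sketch of this labeling procedure, assuming per-fixation durations, following-saccade amplitudes, and fixation timestamps are already available as arrays (names and structure are illustrative); the normalization uses the pooled mean and standard deviation, as described above.

```python
import numpy as np

def coefficient_k(durations, amplitudes, mu_d, sd_d, mu_a, sd_a):
    """Per-fixation coefficient K (Krejtz et al.): z-score of the fixation duration minus
    the z-score of the following saccade amplitude, using pooled statistics."""
    return (durations - mu_d) / sd_d - (amplitudes - mu_a) / sd_a

def label_bins(timestamps_ms, k_values, bin_ms=5000):
    """Average K per 5,000 ms bin: negative -> scanning (ambient), positive -> verification (focal)."""
    bins = (np.asarray(timestamps_ms) // bin_ms).astype(int)
    return {b: ("scanning" if k_values[bins == b].mean() < 0 else "verification")
            for b in np.unique(bins)}
```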
Following this, the periods identified as scanning phases by our identification algorithm were marked. These were then cross-referenced with the states differentiated by the coefficient K, forming a confusion matrix. For example, if the algorithm correctly detected a scanning phase during intervals with a negative K value, it was considered a true positive. This confusion matrix enabled us to evaluate the accuracy and precision of each algorithm in detecting the scanning phase.

Investigating the Effect of Stimulation on Gaze Behavior (RH 2).
To assess the effects of stimulation feedback on gaze movements, we compared gaze behaviors between the control and stimulation conditions during the scanning phase using the following metrics: 1) fixation duration, 2) gaze saccade amplitude, 3) head saccade amplitude, 4) SRR gaze, and 5) SRR head. We compared the gaze behaviors under both control and stimulation conditions to identify changes induced by stimulation feedback.
We performed an analysis, stratifying the gaze data based on combinations of visual conditions (mild or severe) and scanning identification algorithms (Rot_Eye, Rot_Gaze, Area_Eye, Area_Gaze) as sub-factors. After confirming data normality, an independent t-test was used to distinguish differences based on the stimulation type (control or stimulation) factor. Assumption violations, as indicated by Levene's test, led to adjustments in the degrees of freedom through the Welch correction.
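For reference, a minimal sketch of this comparison for one metric using SciPy; switching to the Welch (unequal-variance) form when Levene's test rejects homogeneity of variance is one common way to realize the adjustment described above, and the alpha level is illustrative.

```python
from scipy import stats

def compare_conditions(control, stimulation, alpha=0.05):
    """Independent-samples t-test between control and stimulation data for one metric;
    falls back to Welch's t-test when Levene's test indicates unequal variances."""
    _, p_levene = stats.levene(control, stimulation)
    equal_var = p_levene >= alpha
    return stats.ttest_ind(control, stimulation, equal_var=equal_var)
```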

Investigating the Effect of Stimulation on Gaze Behavior and Scanning Efficiency of Participants with LV (RH 3).
To evaluate alterations in gaze movement among participants with PVL, contingent upon the stimulation feedback from WatchCap (RH 3.1) and their individual scanning strategies (RH 3.3), we compared their gaze behavior during the scanning phase using metrics similar to those employed in Study 1. In determining scanning strategies, we differentiated between participants based on questions asked during the interview sessions regarding their usual reliance on head movement versus eye movement when scanning their surroundings. This allowed us to categorize participants into head-dominant and eye-dominant groups. More details on this can be found in the subsequent "Section 6.2.1 Scanning Strategy". These metrics included 1) fixation duration, 2) gaze saccade amplitude, 3) head saccade amplitude, 4) SRA for gaze, and 5) SRA for head. In the case of Study 2, the momentum of saccades was analyzed using the SRA instead of the SRR, as the SRR can only be measured once per trial, which does not provide a sufficient sample size for statistical analysis.
In our analysis, emphasis was placed on examining the effects of stimulation type (control or stimulation) and scanning strategy (eye-dominant or head-dominant) based on each scanning identification algorithm (Rot_Eye, Rot_Gaze, Area_Eye, Area_Gaze). After confirming data normality, an ANOVA was employed with stimulation type and scanning strategy as independent variables and the scanning identification algorithm as a sub-factor. Assumption violations, as indicated by Levene's test, led to adjustments in the degrees of freedom. The Bonferroni correction was employed for post-hoc tests upon confirming statistical significance.
Additionally, we conducted interviews to gain insights into how WatchCap influenced the visual processing of participants with LV (RH 3.2). The interview questions were centered around the participants' experiences with WatchCap, specifically its impact on gaze movements during tasks, its effectiveness in observing visual stimuli, and their perspectives on potential applications of WatchCap. This methodological approach enabled us to explore the shifts in gaze behavior and the overall influence of stimulation feedback on visual processing experiences. Ultimately, the goal was to explore populations that could receive promising compensation through WatchCap in terms of scanning efficiency, thereby providing insight into its actual usage.

STUDY 1: WATCHCAP VALIDATION WITH SIGHTED VISION USERS
Study 1 aimed to validate the design of WatchCap in a transient PVL environment. We first assessed whether the scanning identification algorithms effectively determined the appropriate timing for activating WatchCap. Subsequently, we examined whether WatchCap successfully modified gaze behavior in alignment with the intended design objectives when activated during the participants' scanning phase.

Scanning Phase Identification Algorithm Validation
In this section, we evaluate the performance of the scanning identification algorithm in distinguishing the scanning phase. To achieve this, we utilized the coefficient K to establish the ground truth for the transition between the scanning and verification phases over the time course and analyzed how accurately the scanning identification algorithm distinguished them.
Before analyzing the algorithms with the coefficient K, we ensured that the K values, averaged over the selected binning time, satisfied the normality assumption. The skewness and kurtosis revealed adherence to normal distribution characteristics (mild: skewness = 0.002, kurtosis = 0.574; severe: skewness = −0.116, kurtosis = 0.234), indicating that binning the time course gaze data into 5,000 ms intervals for the calculation of the coefficient K was appropriate. We evaluated the accuracy and precision of the four scanning identification algorithms by benchmarking them against the coefficient K, which served as the ground truth, across different visual conditions. Under mild visual conditions, the Rot_Eye algorithm achieved the highest accuracy and precision, with values of 0.762 and 0.793, respectively (see Figure 6). The Rot_Gaze algorithm outperformed the others in severe visual conditions, achieving accuracy and precision scores of 0.699 and 0.732, respectively (see Figure 7). In contrast, regardless of the visual condition, the Area_Eye and Area_Gaze algorithms showed less effective discrimination than the Rot algorithms. In the case of the recall value of the verification phase (the percentage of the data in the actual verification phase that the algorithm classified as the verification phase), the Rot algorithms were also found to be higher. In the mild visual condition, Rot_Eye and Rot_Gaze had recall values of 0.773 and 0.757, respectively, while Area_Eye and Area_Gaze had recall values of 0.642 and 0. In the severe visual condition, Rot_Eye and Rot_Gaze had recall values of 0.677 and 0.715, while Area_Eye and Area_Gaze had recall values of 0.602 and 0.647. This result means that the Rot algorithms misidentified the verification phase as the scanning phase less often than the Area algorithms.
This trend highlighted the effectiveness of the Rot algorithms in accurately distinguishing the scanning phase. Moreover, their high precision and recall underscore their capability to minimize incorrect stimulation during the verification phase.
The concern regarding misidentification by the Rot_Eye algorithm, due to eye movement (eye-in-head) induced by the VOR, was addressed in Section 3.2. As mentioned therein, because of the characteristics of VOR eye movements, it is unlikely that eye movements would exceed the thresholds of the I-VDT and scanning identification algorithms. Therefore, errors due to VOR eye movements are not anticipated. This is corroborated by the results, wherein the identification performance of both Rot algorithms appears similar, indicating that the concerned phenomenon did not occur.

Effect of Stimulation on the Gaze Behavior of the Scanning Phase
We explored how WatchCap's stimulation influenced the gaze behavior of participants with sighted vision during the scanning phase. Changes in the fixation- and saccade-related metrics due to the stimulation were examined. In the following subsections, we highlight the significant changes in each metric using the Rot_Eye and Rot_Gaze algorithms, based on the results in Section 5.1. A detailed statistical analysis, including the results using the Area algorithms, which were found to have poor discriminating performance, can be found in Table 3 in Appendix A. We also discuss some discrepancies between the design intentions of WatchCap and the actual observed changes.

Fixation Duration.
For participants experiencing mild simulated visual field loss, in the case of Rot_Eye, the stimulation condition had a significantly shorter fixation duration than the control condition. Likewise, for participants experiencing transient severe visual field loss, both the Rot_Eye and Rot_Gaze algorithms showed shorter fixation durations in the stimulation condition.

Gaze Saccade Amplitude.
In conditions with mild visual field loss, using the Rot_Eye and Rot_Gaze algorithms, the gaze amplitude in the stimulation condition was larger than in the control condition. In contrast, in severe visual field loss conditions, when participants received stimulation feedback from the Rot_Eye and Rot_Gaze algorithms, a notable reduction in gaze amplitude was observed.

Gaze Saccade Momentum.
In participants with mild transient visual field loss, for the Rot_Gaze algorithm, the stimulation condition showed an increase in SRR gaze compared to the control condition, indicating decreased gaze saccade momentum. Similarly, in the context of severe transient visual field loss, utilizing the Rot_Gaze algorithm, the stimulation condition exhibited an increased SRR gaze compared to the control condition, indicating decreased gaze saccade momentum.

5.2.4 Head Saccade Amplitude.
In the mild visual field loss condition, the Rot_Eye algorithm under the stimulation condition showed a larger head amplitude than the control condition. In contrast, under severe visual field loss conditions, the Rot_Gaze algorithm showed a reduced head saccade amplitude in the stimulation condition compared to the control.

Head Saccade Momentum.
In the conditions of mild visual field loss, for the Rot_Eye algorithm, the stimulation condition demonstrated a reduced SRR head value compared to the control condition. For the Rot_Gaze algorithm, the stimulation condition presented an increase in the SRR head value relative to the control condition. When participants experienced transient severe visual field loss, both Rot_Eye and Rot_Gaze conditions showed no difference in head saccade momentum under stimulation feedback.

5.2.6 Discussion on the Effects of WatchCap on Participants with Transient PVL.
In mild visual conditions, notable changes in gaze behavior were observed using the Rot_Eye algorithm. WatchCap's stimulation increased the head amplitude, momentum of saccades, and gaze amplitude while decreasing the fixation duration. This indicates that, per WatchCap's design intention, the gaze movement became longer and straighter. However, when using the Rot_Gaze algorithm, although there was an increase in the gaze amplitude of saccades, a decrease in gaze and head momentum was observed.
Conversely, in severe visual conditions, using the Rot_Eye algorithm decreased the gaze amplitude of saccades. With the Rot_Gaze algorithm, the gaze amplitude, momentum of saccades, and head amplitude decreased. This indicates a deviation from WatchCap's intended gaze behavior changes. Notably, the control condition's gaze amplitude significantly exceeded the visual field in severe visual conditions, indicating the scanning strategy of participants with sighted vision.
Overall, in mild visual conditions, WatchCap using the Rot_Eye algorithm successfully altered the gaze movement of participants with transient PVL in line with the design intention.For the Rot_Gaze algorithm, although there was a decrease in saccade momentum, it remained below the commonly accepted threshold for reversal gaze movement (0.05) [5], and an increase in gaze amplitude was observed.This suggests that WatchCap using the Rot_Gaze algorithm also effectively altered participants' gaze movement as intended.
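For reference, the momentum-related metric can be read as a saccade reversal ratio (SRR). Below is a minimal sketch under the assumption that SRR is the proportion of consecutive saccade pairs whose displacement vectors oppose each other; the study's exact SRR definition is given in its methods, so this is only an illustration.

import numpy as np

def saccade_reversal_ratio(vectors):
    # vectors: sequence of (dx, dy) saccade displacement vectors in degrees.
    v = np.asarray(vectors, dtype=float)
    if len(v) < 2:
        return 0.0
    # Count a reversal when a saccade points against the preceding one,
    # i.e. the dot product of consecutive displacement vectors is negative.
    dots = np.einsum("ij,ij->i", v[:-1], v[1:])
    return float(np.mean(dots < 0))

saccades = [(4.0, 0.5), (3.5, -0.2), (-5.0, 0.1), (4.2, 0.3)]
print(saccade_reversal_ratio(saccades))  # 0.67: two of the three pairs reverse

Under this reading, a higher SRR means more back-and-forth gaze movement and therefore lower saccade momentum, which is consistent with the direction of the effects reported above.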

STUDY 2: WATCHCAP VALIDATION WITH PVL USERS
Study 2 aimed to test WatchCap with individuals experiencing PVL to assess its potential use. We analyzed changes in gaze behavior to evaluate the usability of WatchCap and collected specific feedback to examine its usage and discuss possibilities for further improvement. Although our qualitative analysis covers all participants, the quantitative data from one participant were excluded due to incomplete data acquisition.

6.1 Effect of Stimulation on Gaze Behavior in the Scanning Phase
In this section, we aim to understand the impact of WatchCap on the gaze behavior of participants with PVL. We analyzed the changes during the scanning phase, considering that each participant had a unique visual processing strategy. This analysis helps determine whether WatchCap facilitated wider scene exploration and whether its effect varied among participants. Detailed results of the interaction effects and post-hoc comparisons for each metric are presented in Table 4 in Appendix A. As in Section 5.2, the significant differences under the Rot_Eye and Rot_Gaze algorithms are outlined in the following subsections.
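As an illustration of this kind of analysis, the sketch below runs a mixed-design ANOVA with stimulation condition as a within-subject factor and scanning-strategy group as a between-subject factor, followed by post-hoc pairwise comparisons, using the pingouin library on a tiny made-up table; the study's actual statistical procedure and data are reported in its methods and Appendix A, so this is only a hypothetical example.

import pandas as pd
import pingouin as pg

# Tiny illustrative long-format table (one row per participant x condition);
# a real analysis would use the per-participant metric values from the study.
df = pd.DataFrame({
    "participant": ["p1", "p1", "p2", "p2", "p3", "p3", "p4", "p4"],
    "group": ["eye", "eye", "eye", "eye", "head", "head", "head", "head"],
    "condition": ["control", "stim"] * 4,
    "fixation_duration": [0.42, 0.35, 0.40, 0.33, 0.38, 0.30, 0.45, 0.36],
})

# Interaction of stimulation condition and scanning-strategy group.
aov = pg.mixed_anova(data=df, dv="fixation_duration", within="condition",
                     between="group", subject="participant")

# Post-hoc pairwise comparisons with Bonferroni correction.
posthoc = pg.pairwise_tests(data=df, dv="fixation_duration", within="condition",
                            between="group", subject="participant", padjust="bonf")

print(aov)
print(posthoc)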
6.1.1 Fixation Duration. Under the Rot_Eye algorithm, the eye-dominant group showed no significant change in fixation duration despite receiving stimulation, while the head-dominant group exhibited a decrease in fixation duration (see Figure 8, Rot_Eye). Under Rot_Gaze, both the eye-dominant and head-dominant groups experienced a reduction in fixation duration with stimulation (see Figure 8, Rot_Gaze), resulting in more frequent scan movements.
6.1.2 Gaze Saccade Amplitude. For all participants, the gaze saccade amplitude increased during scanning with stimulation from both the Rot_Eye and Rot_Gaze algorithms (see Figure 9 (A)). This indicates that WatchCap's stimulation feedback led LV participants' gaze to travel further during the scanning phase.
6.1.3 Gaze Saccade Momentum. WatchCap's stimulation feedback, when combined with the Rot algorithms, did not induce significant changes in gaze momentum for any of the LV participants during the scanning phase (see Figure 9 (B)). Thus, stimulation feedback did not have a significant impact on the gaze momentum of LV participants.
6.1.4 Head Saccade Amplitude. The head saccade amplitude was influenced by stimulation feedback in both scanning-strategy groups (see Figure 10). For Rot_Eye, both groups exhibited a significant increase in head saccade amplitude in response to stimulation. Similarly, the stimulation feedback from the Rot_Gaze algorithm significantly affected both groups' head amplitude. This finding implies that WatchCap's stimulation feedback induced larger head movements regardless of strategy.
6.1.5 Head Saccade Momentum. As with gaze momentum, head momentum did not exhibit significant changes due to stimulation when combined with the Rot algorithms (see Figure 10 (B)). This finding indicates that the stimulation feedback did not alter the direction of head movement in LV participants.
6.1.6 Discussion on the Effects of WatchCap on Participants with PVL. We observed that WatchCap's stimulation feedback led to a wider gaze dispersion for both groups. Although we anticipated that the effect of WatchCap might be less pronounced for the head-dominant group, which already actively uses head movement, we observed a decrease in fixation duration and an increase in saccade amplitude across both groups. These changes indicate that LV participants moved their gaze more rapidly and broadly during the scanning phase compared to their original behavior.
Furthermore, even when using the same algorithm, we noted differences in gaze movement based on participants' scanning strategy. For example, the Rot_Eye algorithm did not induce changes in fixation duration for the eye-dominant group, but it decreased fixation duration for the head-dominant group. This implies that the effectiveness of the algorithm may vary based on the individual characteristics of LV users.

6.2 Effect of Stimulation on Visual Processing Experience
In this section, we report the visual processing experiences of participants with LV, focusing on their scanning strategies and interactions with WatchCap. We first explored whether these individuals employ unique compensation strategies while processing visual information, to understand the varying effects of our system among participants. We then examined the changes in their experiences when using WatchCap to understand its influence on visual processing and to assess its potential in scene representation exploration for LV individuals.
6.2.1 Scanning Strategy. Individuals with LV have developed unique scanning strategies adapted to their visual conditions. A key difference among these strategies is the extent to which head movement is used alongside eye movement, which is known to affect not only gaze behavior but also the performance of tasks based on visual processing [61,62]. We therefore focused on investigating the participants' scanning strategies, particularly their use of head movement, and found that each participant had a unique pattern (see Table 2, Scanning strategy).
Head-dominant Group: Mark, Arnold, Paul, and David predominantly use head movements to secure their field of vision. They actively move their heads to observe their surroundings and fix their heads in place when focusing on specific areas of interest. Arnold, who has glaucoma-induced visual field loss and strabismus, reported difficulties with active eye movements, leading to a more substantial reliance on head movements. Despite the challenges of accessing impaired peripheral vision and additional visual conditions, the participants in the head-dominant group have developed strategies to compensate by using more head movement for broader peripheral observation.

Eye-dominant Group: Conversely, Ryan and Adam showed a greater reliance on eye movements for scanning their surroundings. Ryan primarily utilizes eye movements for peripheral vision access and resorts to head movements only when observing areas beyond the immediate view. For example, Ryan maintains a forward-facing head position while driving, using eye movements to check side mirrors or monitor external traffic. This suggests that Ryan's scanning strategy is predominantly based on eye movement-driven exploration, supplemented by minor head movements extending the gaze beyond the immediate field of view.

6.2.2 Effect on Visual Processing. WatchCap's stimulation feedback was designed to induce head rotation that alters users' gaze movement. Our gaze behavior analysis confirmed that users' gaze changed in response to this stimulation, aligning with WatchCap's intended design. To further understand the impact of these changes, we collected and examined each participant's feedback, focusing on how these gaze changes facilitated scene exploration and expedited the processing of visual information (see Table 2, Visual processing experience).
Ryan reported experiencing a natural movement of gaze due to the head rotation created by the stimulation feedback. Specifically, Ryan noted that the vibratory stimulus occurring during gaze movement caused the head to move, leading the gaze to travel further and faster in the ongoing direction and allowing Ryan to scan art images more quickly while performing the visual recognition task. This indicates that Ryan experienced a wider range of gaze movement due to the stimulation feedback and a faster scanning experience in the task scene.
Arnold also reported experiencing head rotation triggered by stimulation, leading to a natural change in view. Arnold felt that the feedback was purposefully rotating the head, guiding the gaze toward areas designated by the system. Similarly, Mark observed that stimulation encouraged broader head rotation, enhancing the reach of the gaze. The vibratory stimulus, in sync with the gaze movement, augmented the direction of the participants' gaze, extending its range.
These observations confirm that the stimulation feedback from WatchCap naturally modified participants' gaze movements, as evidenced by both the gaze behavior analysis and interview feedback. In particular, feedback indicating that their gaze moved further suggests that WatchCap effectively assisted broader scene exploration for participants with LV. Additionally, the feedback that they could scan scenes faster due to the stimulation indicates that WatchCap could facilitate faster scanning, thus accelerating the formation of scene representations.

6.2.3 Potential Use of WatchCap.
We observed, through changes in gaze behavior and interview feedback, that WatchCap altered the visual processing of participants with PVL. Beyond these changes, however, it is essential that the device also has practical utility in helping individuals with LV more smoothly perform the various tasks they encounter in daily life. Therefore, we collected additional interview feedback from participants with PVL regarding the potential use of the device (see Table 2, Potential use).
All participants responded that WatchCap could be usefully employed in the daily lives of individuals with PVL. Ryan, Arnold, and Mark mentioned that the stimulation provided by WatchCap enabled efficient scanning, which could be beneficial in daily life. Additionally, Paul and David highlighted the advantages of the rotational force perception utilized by WatchCap. Paul noted that the stimulation from WatchCap is intuitive and allows for immediate response, offering advantages over audio cues that require interpretation before action. This feedback suggests that WatchCap's approach of leveraging the perception of rotational force to alter visual exploration holds practical applicability in daily life and may impose less psychological demand than other modalities.
Furthermore, participants suggested potential applications leveraging WatchCap's stimulation to direct attention towards task-relevant information. Feedback highlighted driving scenarios as prime examples, where WatchCap could be used either to maintain a broad FoV or to direct the driver's focus towards crucial traffic details. For instance, Ryan discussed the benefit of WatchCap's ability to widen the FoV in driving situations that demand extensive visibility, or of using its stimulation to draw attention to areas requiring immediate focus. Arnold also mentioned that using WatchCap's stimulation to focus the driver's attention on critical traffic information could be beneficial. Adam envisioned an application in which WatchCap's stimulation would guide the user's gaze towards obstacles that might otherwise be overlooked due to a restricted FoV. These insights underscore the potential of WatchCap to steer individuals with LV towards task-relevant information by using the perception of rotational force as a novel, intuitive cue.

DISCUSSION
7.1 Validation of Scanning Identification Algorithm
In this research, it was crucial that stimulation occurred precisely during the scanning phase, characterized by the exploration of new scenes, and that stimulation during the verification phase, marked by deliberate and targeted movements, was avoided. To address this, we developed four distinct scanning identification algorithms, each tailored to determine the optimal timing for stimulation. To ensure that the moments classified as the scanning phase by these algorithms were indeed scanning, we validated the algorithms: the ground truth for the scanning phase was labeled using the coefficient K [37], and the results classified by the algorithms were compared against it. This validation focused on both the accuracy and the precision of the algorithms, since it was essential not only to identify the scanning phase correctly but also to ensure that the verification phase was not erroneously classified as the scanning phase.
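As an illustration of how such ground-truth labels can be produced, the following is a minimal sketch of the per-fixation form of coefficient K as we read it from [37]: each fixation's duration and the amplitude of the following saccade are z-scored and subtracted, with negative values indicating ambient (scanning-like) viewing and positive values focal (verification-like) viewing. The labeling procedure actually used in the study is described in its methods, so treat this only as a sketch with illustrative numbers.

import numpy as np

def coefficient_k(durations, next_amplitudes):
    # durations: fixation durations; next_amplitudes: amplitude of the saccade
    # that follows each fixation. Returns one K value per fixation.
    d = np.asarray(durations, dtype=float)
    a = np.asarray(next_amplitudes, dtype=float)
    z_d = (d - d.mean()) / d.std()
    z_a = (a - a.mean()) / a.std()
    return z_d - z_a

durations = [0.18, 0.22, 0.45, 0.50, 0.20]   # seconds (illustrative values)
amplitudes = [14.0, 11.0, 2.5, 3.0, 12.0]    # degrees of the following saccade
k = coefficient_k(durations, amplitudes)
labels = np.where(k < 0, "scanning", "verification")
print(list(zip(np.round(k, 2), labels)))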
7.1.1 Successful Identification Algorithms: Rot_Eye and Rot_Gaze. Among the proposed algorithms, the Rot_Eye and Rot_Gaze algorithms demonstrated superior identification performance. In both mild and severe visual conditions, these two algorithms exhibited higher accuracy than the others, indicating their effectiveness in discerning the appropriate timing for activating WatchCap and their suitability for use in the system. Additionally, both algorithms achieved high precision and recall values, which are crucial for preventing stimulation during the verification phase that could disrupt visual processing, further confirming their appropriateness as detection algorithms.
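The comparison against the coefficient-K labels can then be summarized with standard classification metrics. Below is a minimal sketch using scikit-learn, with purely illustrative label vectors standing in for the study's data.

from sklearn.metrics import accuracy_score, precision_score, recall_score

# 1 = scanning phase, 0 = verification phase (illustrative labels only).
ground_truth = [1, 1, 0, 0, 1, 0, 1, 1]   # from coefficient-K labeling
predicted = [1, 1, 0, 1, 1, 0, 0, 1]      # e.g., output of a Rot_Eye-style classifier

print("accuracy :", accuracy_score(ground_truth, predicted))
print("precision:", precision_score(ground_truth, predicted))
print("recall   :", recall_score(ground_truth, predicted))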

7.2 Effect of Stimulation Feedback on Gaze Movement
We examined whether WatchCap, implemented with the validated Rot_Eye and Rot_Gaze algorithms, broadened users' gaze movements. Gaze movements accompanied by head rotation typically exhibit longer amplitude and stronger momentum in saccades, enabling broader scene exploration during the scanning phase [15]. In this study, we activated WatchCap during the scanning phase to induce gaze movements accompanied by head rotation and observed changes in gaze behavior. These changes were consistent across both parts of our study: Study 1, which tested WatchCap with sighted participants in a simulated environment, and Study 2, in which we evaluated the applicability of WatchCap with participants who had PVL.

7.2.1 Effect on Gaze Movement of Participants with Sighted Vision. In our initial phase, we conducted tests in a simulated environment designed to mimic PVL, using WatchCap with sighted participants. In the mild vision loss simulation, the stimulation from the Rot_Eye algorithm increased gaze amplitude, head amplitude, and head momentum while reducing fixation duration. When the Rot_Gaze algorithm was used, an increase in gaze amplitude was observed. These findings validate WatchCap's design strategy in a simulated environment, providing a foundation for the subsequent study with individuals with LV.
7.2.2 Effect on Gaze Movement of Participants with LV. The observed effects were also apparent in the user study involving individuals with PVL, which we conducted to evaluate the usability of WatchCap among patients with glaucoma using a similar gaze behavior analysis. When the Rot_Eye algorithm was implemented, participants from both groups demonstrated increased amplitude in both gaze and head movements. Similarly, with the Rot_Gaze algorithm, all participants with PVL showed increased saccade amplitude and decreased fixation duration. These changes imply that gaze dispersion became wider. Consequently, WatchCap's stimulation feedback significantly impacted the gaze movement of individuals with LV, facilitating more extensive exploration of scenes during the scanning phase.
7.2.3 Improvement in Visual Processing Performance. This alteration in gaze movement not only accelerates scene exploration but may also enhance visual processing performance. According to Phillips et al. [61], performance in visual processing tasks is determined by the perceptual span, a measure of the extent of task-relevant information gathered during scanning. Furthermore, Phillips et al. [62] noted that the perceptual span, and the resulting visual processing performance, depend on saccade-related metrics. In these studies, saccade amplitude was the metric used, suggesting a correlation between faster exploration, which enables the collection and processing of information across a broader area, and enhanced visual processing performance. This is further corroborated by the interview feedback from participants with PVL, reinforcing the connection between broader exploratory behavior and enhanced visual processing capabilities.

7.3 Applications of WatchCap
WatchCap, with its innovative use of rotational force perception, can be utilized in two key ways: 1) Inducing wider and faster scene exploration to improve scanning in a task-independent manner, and 2) Guiding the user's gaze towards task-relevant information to assist in task performance.
The first approach leverages WatchCap to help individuals with PVL enhance their scanning in a task-independent manner. Individuals with PVL often experience compromised visual fields and scanning capabilities, which can detrimentally affect not only their visual processing but also their orientation and mobility [40,41,51]. Mobility challenges, in particular, include critical issues in daily life such as wayfinding, obstacle detection, and avoiding people. Therefore, if WatchCap can encourage individuals with PVL to explore a wider scene, it could mitigate scanning and mobility problems and provide task-independent assistance in their daily lives. Furthermore, when combined with devices tailored for particular tasks, the assistance provided could be even more effective. For instance, integrating it with the indoor augmented reality (AR) navigation system suggested by Zhao et al. could be beneficial [83]. Even if PVL individuals struggle to promptly recognize AR cues due to a limited FoV, WatchCap's induced broader exploration might enable them to detect cues more swiftly than before.
The second approach, as suggested by participants in Study 2, involves harnessing the perception of rotational force as a new modality for conveying task-relevant cues. Prior research has facilitated task performance for individuals with LV by providing task-specific information through audio descriptions or by issuing warnings and notifications via audio and haptic feedback [24,80]. However, these methods often impose cognitive burdens due to the necessity of interpreting the feedback. In contrast, the perception of rotational force, as utilized by WatchCap, instinctively prompts an immediate physical response, bypassing the need for cognitive interpretation. This unique characteristic of WatchCap's stimulation could serve as a means to direct users' gaze towards task-relevant information, enhancing task engagement and performance. For instance, an application envisioned by LV participants involves directing drivers' attention to crucial traffic details, thereby promoting safer driving. While further research is needed to compare or integrate the perception of rotational force with auditory or haptic feedback comprehensively, the potential of using it as a distinctive modality in assistive devices for LV individuals is promising.
7.4.1 Generalization of findings across different tasks. Our task design intentionally focused on task-independent, ambient scanning to minimize individual differences related to top-down attention that specific tasks may invoke. However, our study's measurements were more indirect, as we focused on gaze and head behavior rather than directly assessing visual processing performance defined by specific goals. Therefore, to fully ascertain WatchCap's effectiveness in aiding daily tasks, its impact should be evaluated across various tasks that pose challenges to individuals with LV, including navigation, hazard detection, and object recognition [21]. Such an exploration would broaden the generalization of WatchCap's utility in assisting with everyday activities.
7.4.2 Generalization of the findings of Study 2. Study 2 expanded our research from transient visual field loss to actual LV individuals. While the results mostly mirrored those from Study 1, we observed variability in the scanning strategies employed by people with LV. This categorization is not binary, however, and the degree of reliance on eye versus head movements, along with varying levels of visual impairment, is diverse. Additional factors, such as age, visual cognitive processing, duration of visual field loss experience, and other symptoms, may influence the suitability of algorithms for stimulating individuals. Consequently, while the current system proved effective for the participants in our study, its use should be customized for individuals, taking into account their unique conditions and compensatory strategies.

CONCLUSION
In this study, we explored the compensatory effects of head movement as a means to aid individuals with PVL during visual recognition tasks. We introduced WatchCap, a device that leverages the hanger reflex phenomenon: when it recognizes the start of a user's scanning-related gaze behavior, it prompts compensatory head movement via vibrational stimulation. Our findings demonstrated that WatchCap enabled broader and more rapid scanning by compensating for participants' gaze movement through the use of illusory force to drive head rotations. We anticipate that our work holds promise when integrated with existing aids designed to enhance visual information accessibility. Such integration would not only improve scanning abilities in ambient situations through WatchCap alone but also synergize with other technologies, assisting the daily experiences of those with LV, especially when paired with cue-based or detail-oriented visual accessibility solutions.

Fig. 1. Users experiencing peripheral vision loss (PVL) perform a visual recognition task while wearing WatchCap. (A) Using the Meta Quest Pro, we simulated an environment where even users with sighted vision can interact with visual stimuli while experiencing two types of transient PVL. (B) WatchCap, implemented using the Double-Hanger Reflex, can induce two directional rotations of the user's head. By generating vibrations at the indicated yellow points, both counterclockwise and (C) clockwise head rotations are facilitated.

Fig. 4. The virtual environment used for the experiment (left) and simulation with different extents of visual field loss (middle: mild, right: severe)

4.1.1 RQ 1: How accurately does the proposed scanning identification algorithm differentiate the scanning phase? RH 1: The scanning identification algorithm can distinguish the appropriate phase for activating WatchCap.

4.1.3 RQ 3: Does WatchCap enhance the scanning efficiency of individuals with LV? RH 3: WatchCap will increase the scanning efficiency of participants with LV. RH 3.1: WatchCap will change the gaze behavior of participants with LV in the scanning phase. RH 3.2: WatchCap will enhance the experience of participants with LV during the visual recognition task. RH 3.3: The influence of WatchCap is anticipated to vary depending on the unique scanning strategies employed by participants with LV.

Fig. 5. Data processing flow and comparison for analysis

Fig. 6. Confusion matrices of the four scanning identification algorithms under mild visual conditions, with coefficient K as the ground truth
Fig. 7.


"
I truly struggle with moving my eyes to follow things that are moving.It's pretty tough for me (Arnold)." "When I'm trying to look around, I usually turn my head to change my view.In addition, when I need to focus on something specific, I turn my head in that direction and keep my eyes locked on it." (Arnold) "In my daily life, I mostly move my head to look around, instead of just moving my eyes.It helps me see things better." (Mark) "When I'm looking around at a new environment, I usually move my head to observe my surroundings.(Paul)" "My field of view is very limited, it's hard for me to smoothly observe my surroundings, so I move my head to explore the environment around me. "(David)    Eye dominant "I keep my head facing straight ahead and just move my eyes to look around." (Ryan) "When I'm driving, I keep my head straight but move my eyes to stay aware of the traffic around me. " (Ryan) "My right eye sees well, I usually move my eyes to look around and observe my surroundings."(Adam)    Stimulation feedbackEffect of feedback"When I looked in a certain direction, the vibration made my head move, and that made my eyes move faster." (Ryan) "The vibration made my head move naturally, and that led to my gaze following along, changing my view." (Arnold) "There was a vibration while my gaze was shifting, and it naturally changed my view." (Arnold) "When I moved my eyes, vibration activated, and it naturally encouraged me to move my head more." (Mark) "When vibration is activated, my head naturally moves to the left or the right." (Paul) "While focusing on the experiment, I unconsciously accepted the vibration stimulus, and my head moved naturally in response."(David)


4.1.2 RQ 2: Does WatchCap modify the user's gaze movement? RH 2: WatchCap's feedback will change the gaze behavior of participants with transient PVL for wider scene exploration. RH 2.1: The fixation duration will decrease with stimulation feedback. RH 2.2: The saccade amplitude will increase due to stimulation feedback. RH 2.3: The saccade momentum will increase following stimulation feedback. RH 2.4: The effect of stimulation feedback will remain consistent regardless of visual conditions.

Table 1. Demographic information of the participants with LV
4.2.1 Participants. For Study 1, 15 participants (age: M = 19.47, SD = 0.88; 4 male, 11 female) with sighted vision were recruited. Participants who used optical correction devices such as lenses or glasses took part in the experiment with their devices on. Participation took approximately one hour, and each participant received 12 USD. For Study 2, 6 participants (age: M = 29.00, SD = 8.65; all male) diagnosed with PVL were recruited (Table 1).

Table 2. Thematic analysis of interviews with participants with LV

Scanning strategy
Head dominant:
"I truly struggle with moving my eyes to follow things that are moving. It's pretty tough for me." (Arnold)
"When I'm trying to look around, I usually turn my head to change my view. In addition, when I need to focus on something specific, I turn my head in that direction and keep my eyes locked on it." (Arnold)
"In my daily life, I mostly move my head to look around, instead of just moving my eyes. It helps me see things better." (Mark)
"When I'm looking around at a new environment, I usually move my head to observe my surroundings." (Paul)
"My field of view is very limited, it's hard for me to smoothly observe my surroundings, so I move my head to explore the environment around me." (David)
Eye dominant:
"I keep my head facing straight ahead and just move my eyes to look around." (Ryan)
"When I'm driving, I keep my head straight but move my eyes to stay aware of the traffic around me." (Ryan)
"My right eye sees well, I usually move my eyes to look around and observe my surroundings." (Adam)

Stimulation feedback
Effect of feedback:
"When I looked in a certain direction, the vibration made my head move, and that made my eyes move faster." (Ryan)
"The vibration made my head move naturally, and that led to my gaze following along, changing my view." (Arnold)
"There was a vibration while my gaze was shifting, and it naturally changed my view." (Arnold)
"When I moved my eyes, vibration activated, and it naturally encouraged me to move my head more." (Mark)
"When vibration is activated, my head naturally moves to the left or the right." (Paul)
"While focusing on the experiment, I unconsciously accepted the vibration stimulus, and my head moved naturally in response." (David)
Changes by stimulation:
"When I got the vibration feedback, my gaze moved more quickly, so I could observe the art image faster." (Ryan)
"I could feel the feedback with certain eye movements, and it felt like it was guiding my gaze purposefully." (Arnold)
"Usually, the vibrations happened when I wasn't focusing on a specific spot, and my gaze would move to the left or right along with it." (Paul)
"Every time there was a vibration, my focus shifted, but it felt like it was all happening randomly." (Adam)
"I felt like the vibration cues were guiding my gaze, making my field of observation expand even more." (David)
"When I did the experiment with the vibration, I moved my eyes around more actively than before." (David)

VR environment
"The pictures in the VR felt so real that I could get into it and focus on the experiment." (Paul)
"Since one of my eyes doesn't see well and I can't really feel the depth, the 2D pictures didn't feel too weird for me." (Adam)
"It felt so real, like the real world, because the details were super high-quality, so I could really focus." (Adam)
"It was my first time using VR, and I had trouble with the dazzling pictures." (David)

Discomfort
"While doing the task, I perceived the vibrations as an external stimulus." (Ryan)
"Usually, I interpreted the vibration as a sign to stop, almost like it was controlling my gaze to not move any further sometimes." (Arnold)
"The vibrations didn't make me feel uncomfortable or annoyed, but not knowing when they would happen made me feel scared." (Paul)
"The vibrations seemed to happen randomly, which was a bit unpleasant, but I tried to ignore them and focus on the experiment." (Adam)

Potential use
WatchCap (current version):
"Since my gaze moved faster and I could observe pictures quicker, I think this could be truly useful in daily life too." (Ryan)
"The concept of guiding my gaze to look wider seems like it would be truly helpful for patients with vision deficits." (Arnold)
"Expanding the view like this, I believe, would be really useful for patients with peripheral vision loss." (Mark)
"With sound, I have to figure out what it means, but this kind of feedback makes me react right away, so it seems more useful." (Paul)
"I often feel tired from sound notifications, and they sometimes get in the way of my work." (David)
"My vision isn't great, so my other senses are pretty sharp. I found vibration cues to be super helpful." (David)
Future application:
"It seems like it could help people who need to control their gaze to a specific area, like test takers, or those who need to have a wide field of view, like drivers." (Ryan)
"If it guides drivers to areas they need to pay attention to, I think it could be truly useful." (Ryan)
"As the task went on, I got used to the vibration feedback, but I think it would be necessary to try changing the intensity or pattern." (Arnold)
"For patients with vision loss who are driving, constantly forcing them to recognize traffic information could be a huge help." (Arnold)
"I often have trouble checking out my surroundings because of my vision problems. It'd be great if they used these cues to guide directions." (Paul)
"Because of my narrow field of view, sometimes I don't notice things around me. If something made me turn my head to look, that would be a big help." (Adam)