GlassMessaging

Communicating with others while engaging in simple daily activities is both common and natural for people. However, due to the hands- and eyes-busy nature of existing digital messaging applications, it is challenging to message someone while performing simple daily activities. We present GlassMessaging, a messaging application on Optical See-Through Head-Mounted Displays (OHMDs), to support messaging with voice and manual inputs in hands- and eyes-busy scenarios. GlassMessaging was iteratively developed, informed by a formative study identifying current messaging behaviors and challenges in common multitasking-with-messaging scenarios. We then evaluated the application against phone-based messaging across varying texting complexities in eating and walking scenarios. Our results showed that, compared to phone-based messaging, GlassMessaging increased messaging opportunities during multitasking due to its hands-free, wearable nature and multimodal input capabilities. The affordances of GlassMessaging also give users easier access to voice input than the phone, which reduces response time by 33.1% and increases texting speed by 40.3%, at a cost of 2.5% in texting accuracy, particularly as texting complexity increases. Lastly, we discuss trade-offs and insights to lay a foundation for future OHMD-based messaging applications.

Fig. 1. Interactions in GlassMessaging. The user undertakes a sequence of steps: 1) receives a message notification from Peter; 2) utters the voice command 'PETER', which prompts the contact list to automatically scroll to and open the chat interface of "Peter"; 3) invokes the voice command 'VOICE MESSAGE', thereby activating voice dictation (or, alternatively, opens the keyboard by pressing the button in mid-air); 4) starts dictating the reply message; 5) issues the voice command 'SEND', resulting in the message being sent.

INTRODUCTION
The pervasive use of mobile devices has substantially augmented our communication abilities, facilitating instant connectivity irrespective of time and location. As a result, messaging applications have grown immensely popular, with WhatsApp, Telegram, Snapchat, and Messenger among the top 10 most downloaded mobile applications [20].
However, seamlessly integrating such applications into daily routines remains a challenge. Engaging with mobile messaging during daily activities such as preparing food, walking, or jogging can be difficult. The inherently handheld design of current messaging applications often necessitates considerable visual and manual engagement, thereby interfering with other tasks that demand similar attention. Despite these challenges, people persist in messaging while participating in various activities, as underscored by a study showing that 13% of text messages are sent while on the move [6]. This indicates an unwillingness among users to relinquish the benefits of mobile messaging even when multitasking.
Despite the introduction of alternative input methods, such as dictation, to mobile devices, these have not entirely remedied the issue. The need to physically hold the device and concerns about convenience, privacy, and social perception have led most users to favor touchscreen typing over voice input [17,26].
How can we improve mobile messaging for efficient communication during daily multitasking? Optical See-Through Head-Mounted Displays (OST-HMDs, OHMDs), or AR smart glasses [42], enable heads-up interactions with digital information while preserving awareness of the surrounding environment [52,59,84]. Moreover, since OHMDs lack comfortable touch-typing mechanisms [1], they more naturally afford hands-free voice input, which reduces potential conflicts with daily activities. Therefore, examining the use of OHMDs for multitasking with messaging and their advantages over conventional phones is a promising avenue for research.
In this paper, we first conducted a survey and an observational study to gain a deeper understanding of the current needs, practices, challenges, and limitations users face with mobile messaging while multitasking. Based on these findings, we iteratively designed a messaging application for OHMDs, named GlassMessaging, and compared it with the Telegram application on mobile phones in a controlled study during daily multitasking situations. Our results suggest that, despite the current technological constraints of the OHMD platform, GlassMessaging improved access to voice input and facilitated more seamless interactions compared to phones, reducing response time by 33.1% and boosting texting speed by 40.3%. This points to the considerable potential of OHMDs as a valuable supplement to mobile phone-based messaging during multitasking.
Nonetheless, several limitations need to be addressed before the full potential of this platform can be realized. For instance, the use of GlassMessaging led to a 2.5% decrease in texting accuracy, particularly when the complexity of the text increased. Additionally, current OHMDs have certain inherent disadvantages (e.g., underdeveloped hardware capabilities, unfamiliar usage, limited interaction support [42,50,73]) compared to the extensively tested and well-established mobile phones presently on the market.
The contribution of this work is threefold:
• Enhanced understanding of mobile multitasking messaging behavior through a survey and an observational study.
• An iteratively designed OHMD messaging application, GlassMessaging, with specific design features and guidelines that can inspire future OHMD-based messaging interfaces.
• An empirical evaluation comparing GlassMessaging to mobile phone-based messaging that reveals their trade-offs and demonstrates that GlassMessaging, despite current technological limitations, has great potential to better accommodate people's daily communication needs in ubiquitous multitasking scenarios.

RELATED WORK

Multitasking with Messaging
People commonly talk to each other and sustain social interactions while performing daily tasks face-to-face. This social interaction is mirrored in messaging applications such as WhatsApp, Telegram, Slack, and WeChat, which facilitate togetherness, intimacy, and support among users [13,31]. For example, research has shown that using mobile instant messaging can improve student engagement and interactions during group discussions [72]. Nevertheless, the hands and eyes required to use messaging apps can limit this type of social behavior, especially during multitasking. Despite these challenges, people often engage with these applications while undertaking other activities such as walking, eating, or commuting. For example, a study of 60,000 text messages in the US found that 13% of the messages were sent while on the move [6]. Simultaneous conversations with multiple contacts and consumption of different media types are also common during messaging [6]. Furthermore, the expectation of a prompt response exerts social pressure on users, leading to frequent and unavoidable multitasking [4,58,60].
Previous research has investigated messaging/texting behaviors in various contexts, but limited research exists on understanding user requirements and improving behaviors during multitasking [6,15,31,32,58]. Hence, this study examines messaging behaviors and requirements in everyday multitasking scenarios.

OHMD and Multitasking
The mobile phone, with numerous messaging applications available such as WhatsApp, Telegram, and WeChat, is a prevalent and convenient platform for multitasking [5]. However, it may be unsuitable for multitasking in scenarios requiring attention to surroundings or ongoing activities. For instance, texting while walking can lead to distraction and an increased risk of falling [36]. In driving scenarios, it can impair drivers' attention and increase accident risk [10,61,69,71].
Conversely, Optical Head-Mounted Displays (OHMDs) can potentially enable better multitasking due to their hands-free nature and enhanced situational awareness [9,11,23,33,42,43,50-52,59]. For example, using OHMDs for messaging can mitigate the distracting cognitive demands during driving compared to mobile phones [38,39,68]. However, current messaging apps for OHMDs, like WeChat on the Vuzix Blade, are adaptations of mobile interfaces and lack design specificity for OHMD usage. These apps often feature a compact layout and an opaque interface, making them difficult to use on OHMDs. Google Glass XE (2013-2017) [29,30,77] was designed for OHMDs but lacked formal evaluation in the literature and did not provide substantial contextual information for messages (see details in sec 6.3). This study explores the unexplored use case of OHMD messaging to support multitasking in daily scenarios.

Hands-free Text Entry on OHMD
The hands-free nature of OHMDs provides opportunities to support multitasking through hands-free text inputs, such as voice input, head movement, and gaze input [50]. Among these, voice input is the most promising strategy, proving especially useful when the availability of hands, eyes, keyboard, or screen is limited, or when natural language interaction is preferred [16]. It causes less visual distraction and offers a higher input speed than other methods, such as typing [49,66]. Voice input can take the form of voice recording (recording voice messages) or voice dictation (transcribing voice messages to text in real time) [64]. Studies suggest that voice dictation improves the response rate and task completion efficiency without affecting multitasking levels compared to recording [64].
While head, tongue, and gaze inputs are potential methods for multitasking, ergonomic fatigue makes head movements unsuitable as a primary text input source [46,50]. Current gaze input implementations on OHMDs are error-prone and require excessive calibration, making them impractical for daily multitasking with messaging scenarios [8,23]. Thus, among hands-free input methods, voice input is the most reliable for enhancing multitasking support, notwithstanding potential drawbacks such as misrecognition and accidental triggering [48,81].

STUDY 1: FORMATIVE STUDY ON UNDERSTANDING USERS' MESSAGING BEHAVIORS
We conducted a formative study to comprehend current messaging behaviors and related pain points during multitasking, guided by the following research questions.

• RQ1: In what daily scenarios is messaging commonly performed?
• RQ2: What are the common features, existing practices, and difficulties of messaging in these scenarios?
To answer our research questions, we conducted a survey to identify common messaging scenarios and then used an observational study to identify messaging behaviors.

Survey
We designed a questionnaire to assess messaging behaviors during daily activities (ADLs [22,25]) and identify common scenarios. The questionnaire, consisting of 18 Likert-scale questions on messaging frequency during ADLs and an open-ended question on other multitasking scenarios, was posted on a university forum. We received 43 responses (17M, 26F; mean age = 26.1 years), with an average completion time of around 5 minutes. We excluded two incomplete responses.
3.1.1 Finding: Most Common Messaging Scenarios. After analyzing the survey, we narrowed the most frequent scenarios down to the top five (see Figure 2). However, we excluded the "online meeting" scenario, since users tend to multitask on their laptops during meetings, and our focus was on supporting mobility scenarios, specifically messaging on phones. We combined "walking" with the "transportation" scenario and disregarded "toileting" due to privacy concerns, despite its frequency. Hence, we selected two messaging-intensive multitasking scenarios, Eating Alone and On Transportation, for further observation.

Observational Study
3.2.1 Participants. After identifying the two most common messaging scenarios, we conducted a series of interviews and observations. We invited ten (6M, 4F; mean age = 25.3 years) university students (P1-P10) who frequently use messaging apps for social and learning purposes. All had over nine years of experience with WhatsApp, WeChat, or Telegram for mobile and desktop messaging. Each session lasted 30-60 minutes, with participants receiving a reward equivalent to USD 7.

Process.
We observed users' messaging behaviors and attention division between screens and surroundings while they ate alone or commuted by bus, to understand their usage of familiar applications. We encouraged verbalization of actions and self-reporting. We also conducted semi-structured interviews for further insight into their messaging habits and real-life usage. These interviews were audio-recorded, transcribed, and qualitatively analyzed.

Data Analysis.
One co-author analyzed interview transcripts and observation notes using thematic analysis, following Braun and Clarke's methodology [7]. After familiarizing themselves with the data and generating initial codes, the co-author grouped the codes into common themes based on the content. Next, with the help of an additional co-author, they discussed, interpreted, and resolved discrepancies or conflicts during the grouping process. Finally, they reviewed the transcripts and audio recordings to extract specific quotes relevant to each theme (see Appendix A-Figure 11 for the themes and codes).

Findings.
We identified users' messaging and multitasking behaviors, messaging practices and features, and difficulties faced when messaging and multitasking.
Common Messaging and Multitasking Behaviors. People's messaging and multitasking behaviors differed between the two identified scenarios, which represent general hands-busy and mobility situations in which people cannot exclusively concentrate on messaging, as their main focus lies on the primary tasks. To limit our scope in the transportation scenario, we only observed participants walking, standing, and sitting on the bus. Firstly, when standing on public transport, users typically lean on handrails for support while using their phones: 1) with both hands for viewing and composing messages, or 2) with one hand for viewing and the other alternating between holding the handrail and messaging. Once they feel safe or stable, they prefer using both hands for quicker typing. Similarly, while seated, users typically use both hands for messaging, resting one or both elbows at their waists for comfort (P1-P10).
Secondly, when people use messaging apps while eating alone, they often eat with one hand and use the other for the phone. They use both hands for efficient messaging when necessary and accept slower, less accurate one-handed use for non-urgent messages. "When using one hand [to type], I feel the screen is a bit wide, so I have to lean my phone to my side a bit and touch the keys on the other side of the keyboard (P9)."
In both scenarios, people like to complement their phone usage with a smartwatch, if available. They check notifications on the smartwatch, determine the message's urgency, and pull out their phone if they choose to reply.
We observed that for formal messages, users typically use two hands to compose well-structured sentences and avoid errors, whereas for casual messages, they prefer quick, shorter replies split into multiple messages.
Common Features and Practices for Messaging. By identifying the most important and frequently used features of mobile messaging applications, we could prioritize them in future designs, ensuring that users can easily access all needed functions without changing their fundamental messaging style. The messaging process generally involves three key interfaces: notification, chat screen (one-to-one or group), and chat selection. Users primarily engage with the notification interface, where they determine whether to open the corresponding chat by viewing the sender's name and avatar in the pop-up. In one-to-one chat screens, users typically send texts and use stickers or emojis to express feelings. In group chat screens, users recognize members by names instead of avatars, given the frequent avatar changes. For chat selection, users generally scroll through chats or use the search bar for less familiar contacts.
We noted that viewing and replying to messages were the most common activities. When focused on their primary tasks, users tended to avoid high-attention-demanding activities like initiating new conversations. The process of viewing and replying, as described by participants, is detailed in Appendix A-Figure 10.
Common Difficulties Faced when Messaging and Multitasking. Apart from their behaviors and common practices, people face several difficulties when messaging and multitasking:

• Hands-busy: primary tasks may slow down or cease due to occupied hands. Because people need to stop eating to compose messages with two hands, they may slow down or pay less attention to the food, reducing their enjoyment of the meal. In addition, some might need both hands to handle utensils (P3, P6), making it hard to free their hands for messaging. Similarly, messaging on a crowded bus is challenging, as one or both hands are needed for stability (P1-P10), restricting messaging. This temporarily discourages bus passengers from messaging (P9).

• Eyes-busy: switching limited attention between the device and the environment. When occupied by their primary task, participants had little capacity to think thoroughly about their messaging. For example, while waiting for a bus, frequent attention shifts between the screen and the arriving bus were necessary, causing some participants to fear missing the bus due to excessive messages (P2, P3, P9). Similarly, messaging while walking on a busy street often requires individuals to slow down or halt in a safe place, diverting their attention from their surroundings to their phones.

• Other difficulties. Participants also reflected on other issues: prolonged screen time in moving vehicles induced dizziness (P1, P3, P9); participants subtly tilted their phones to protect their privacy when viewing personal messages or photos (P3, P4); and during meals, screen glare and reflection, especially when phones lay on a table, compounded visual difficulties, an effect amplified for those with poor eyesight (P1).

STUDY 2: ITERATIVE DEVELOPMENT OF A MESSAGING APPLICATION FOR OHMDS
Study 1 identified limitations in messaging capability while multitasking on existing mobile platforms. In response, we iteratively developed a messaging application specifically for OHMDs, a platform that shows promise for multitasking across various contexts (sec 2.2). This section focuses on evaluating the feasibility of one-to-one text messaging on OHMDs during multitasking (excluding multimedia content and group conversations), as it is the most common format (sec 3.2.4, [74]).

Iteration 1
4.1.1 Exploration of Existing Applications. Since we did not find any existing messaging applications designed explicitly for OHMDs, we utilized two popular mobile messaging apps (Telegram and WhatsApp) as proxies to inform the design of messaging apps on OHMDs. We installed Telegram on the Vuzix Blade and Epson BT-300, and mirrored Telegram and WhatsApp on the Microsoft HoloLens 2 (HL2) (using Mirage) and Nreal Light (using a USB-C display connection). Figure 3 shows the interfaces. We asked four volunteers (2M, 2F; mean age = 22.9) with more than five years of mobile messaging experience to compose, view, and reply to messages using each OHMD while sitting, standing, and walking. They used the think-aloud protocol to identify usability issues and shared their messaging experiences on each device. Before testing each device, the participants were trained on the supported interactions, such as the 2D touchpad on the right temple for the Vuzix Blade, the hand-held 2D touchpad for the BT-300, the 3D hand controller for the Nreal Light, and the 3D mid-air gestures for the HL2.
4.1.2 Insights for Iteration. Overall, participants found the UI of the messaging apps generally intuitive but not well-tailored to OHMDs, resulting in several usability issues. The non-transparent, cluttered UI layout with long chat history blocked the view of the environment; the color scheme was unsuitable for near-eye displays (e.g., the light mode was too bright, the dark mode was too transparent); and the fonts and buttons were too small. Interactions were not intuitive (e.g., the 3D hand controller) and caused fatigue (e.g., mid-air gestures with large hand movements), and text entry with virtual keyboards was challenging.

Iteration 2
The focus of Iteration 2 was on adapting the UI and interactions of existing mobile messaging apps for OHMDs. To tackle the difficulties associated with virtual keyboards and to support hands-busy situations, we added voice dictation for text entry and voice commands for hands-free UI navigation. We also incorporated ring mouse interaction for quicker and more accurate scrolling and selection [65,67], while retaining mid-air gestures for their "intuitive" touch-like content manipulation paradigm. These updates aimed to support non-invasive multitasking scenarios on OHMDs.
4.2.1 Apparatus. We selected Microsoft HoloLens 2 (HL2), an OHMD with hand-tracking, voice commands, and world-scale positioning (2k resolution, 52° diagonal FoV), to develop GlassMessaging, our messaging app designed for OHMDs. A wireless ring mouse (Sanwa Supply 400-MA077) facilitated easy directional UI element selection (Figure 5). We developed GlassMessaging using Unity 3D (2021.3.6f1) and the Mixed Reality Toolkit (MRTK 2.8), leveraging MRTK's built-in functions for mid-air gestures, voice inputs, the virtual keyboard, and content stabilization. To simulate a realistic messaging experience, we implemented a virtual chat server in Python, running on a tablet computer (Surface Pro 7+) connected to the HL2 via Wi-Fi, enabling bi-directional communication through a socket connection between the client and the server. For implementation details, see https://github.com/NUS-HCILab/GlassMessaging.
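To make the client-server setup concrete, the following is a minimal sketch of such a virtual chat server; the port number, threading model, and newline-delimited JSON message format are illustrative assumptions, not the protocol of the actual implementation (see the repository above for that).

```python
# Minimal sketch of a virtual chat server (assumed structure, for illustration).
import json
import socket
import threading

HOST, PORT = "0.0.0.0", 9000  # hypothetical port; the real value may differ

def handle_client(conn: socket.socket, addr) -> None:
    """Receive client messages and reply with scripted responses."""
    with conn:
        buffer = b""
        while True:
            data = conn.recv(4096)
            if not data:
                break
            buffer += data
            # Assume newline-delimited JSON, e.g.
            # {"sender": "user", "contact": "Peter", "text": "On my way"}
            while b"\n" in buffer:
                line, buffer = buffer.split(b"\n", 1)
                msg = json.loads(line)
                print(f"[{addr}] {msg['contact']}: {msg['text']}")
                reply = {"sender": msg["contact"], "text": "Got it!"}
                conn.sendall((json.dumps(reply) + "\n").encode())

def main() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        print(f"Chat server listening on {HOST}:{PORT}")
        while True:
            conn, addr = srv.accept()  # one thread per connected HL2 client
            threading.Thread(target=handle_client, args=(conn, addr),
                             daemon=True).start()

if __name__ == "__main__":
    main()
```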
4.2.2 Interface Design. To enhance learnability and maintain consistency [56] with familiar interfaces, we chose to modify the UIs of existing mobile messaging apps and tailor them to OHMDs, instead of redeveloping them entirely. The initial interface is shown in Figure 4a.

Main Interfaces. Our design focused on the three primary interfaces identified in study 1 (sec 3.2.4): the chat screen, the contact list, and the notification. We positioned the chat screen at the middle-center, notifications at the top-center, and the contact list on the right side, improving their noticeability [14] based on importance (sec 3.2.4). Self-testing by four co-authors indicated that placing the chat list on the dominant-hand side (contrary to the default left side in mobile/desktop messaging apps) minimized the effort and hand movements necessary for direct mid-air interactions.

UI Elements. Adhering to HL2's design guidelines [19] and existing findings [21], we employed a billboard style with a gray/dark background and white/green text to enhance legibility and distinguish UI elements visually. The background was semi-transparent (i.e., gray) so that the environment remains visible while messaging. Font sizes ranged from 16-24 pt for comfortable legibility using the HL2's default Segoe UI font [18,19]. As shown in Figure 4a, selected elements were highlighted by green or blue borders [11,21].

Position of UI. In alignment with Microsoft's Mixed Reality guidelines [19,27], we positioned the graphical interface 0.5 m ahead and 10° below eye level to facilitate comfortable direct manipulation via mid-air gestures. Furthermore, we implemented body-locking for a more comfortable reading experience during multitasking [27,47].
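For illustration, the placement rule above can be expressed as a small geometric computation; the sketch below derives the panel anchor from a head position and body yaw (the names and structure are hypothetical, as the actual app relies on MRTK's built-in solvers and body-locking rather than hand-rolled math).

```python
import math

def interface_anchor(head_pos, body_yaw_deg, distance=0.5, pitch_down_deg=10.0):
    """Compute the UI panel's world position: `distance` metres ahead of the
    head along the body's yaw direction, pitched `pitch_down_deg` degrees below
    eye level. Using body yaw (not full head rotation) approximates body-locking."""
    yaw = math.radians(body_yaw_deg)
    pitch = math.radians(pitch_down_deg)
    horizontal = distance * math.cos(pitch)
    return (head_pos[0] + horizontal * math.sin(yaw),   # x: left/right
            head_pos[1] - distance * math.sin(pitch),   # y: below eye level
            head_pos[2] + horizontal * math.cos(yaw))   # z: forward

# Example: a panel anchor for a user at eye height 1.6 m facing straight ahead.
print(interface_anchor((0.0, 1.6, 0.0), body_yaw_deg=0.0))
```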

Evaluation.
Six volunteers (1 UI designer, 2 UX researchers, 1 HCI researcher, and 2 students) were invited to identify usability issues of the initial prototype while messaging during daily tasks, such as working on a computer while sitting, casually walking, and arranging items on a table while standing. Observations with the think-aloud protocol were utilized, similar to the previous iteration.

Insight for Iteration.
Although participants found the initial GlassMessaging "interesting" and identified its potential advantages over existing mobile messaging platforms, they provided feedback for improvements to the interface and input interactions, as shown in Table 1.

Final Version: GlassMessaging
The final version (see Figure 4b and 5) of GlassMessaging incorporates the feedback from the earlier iterations (e.g., Table 1).
Table 1. Problems identified in Iteration 2 and their corresponding design solutions for the Final Version.

Blocked Vision
• Adjust UI arrangement: limit the number of contacts and messages shown in the history to around three to minimize clutter and visual interference with daily tasks, as messaging often occurs with only a few recent contacts; prevent the virtual keyboard from blocking the chat screen while typing.
• On-demand interface with alerts: allow fully hiding the interface when daily tasks need undivided attention, letting only notifications through to avoid delaying urgent messages.
• Increase transparency: to increase the visibility of the environment, increase the transparency of messages from latest to oldest (i.e., the latest messages are more opaque and visible, while older messages gradually fade).
• Use non-visual feedback: to keep visual attention on daily tasks, incorporate distinguishable auditory feedback into UI interactions and notifications; this reduces the need to visually confirm individual UI elements, such as after clicking a button.

Inconvenient Interaction
• Reduce interaction cost: position the most frequently used interactive UI elements closer to the user's dominant hand to enable easier mid-air manipulation; allow head-locking of the interface when the user needs to move their head for daily tasks, as body-locking requires additional head rotation to see the interface.
• Voice command shortcuts: enable shortcuts for multi-step voice commands (e.g., use 'REPLY' to represent 'OPEN NOTIFICATION' followed by 'VOICE MESSAGE').
• Decrease learning curve: make ring interaction more intuitive to minimize confusion (e.g., due to hardware constraints, the ring mouse initially used the 'right' button counter-intuitively to move the selection 'leftward').

Errors
• Reduce interfering feedback: disable hand-mesh visualization for mid-air gestures to avoid interfering with hand-eye coordination (although hand-mesh visualization gives users feedback on whether mid-air interaction is working, it blocks the view of the fingertips, which are essential for fine-grained control of certain daily tasks).
• Increase error tolerance: increase the gap between interactable elements to minimize accidental triggers when using direct mid-air manipulation.
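The "Increase transparency" solution in Table 1 amounts to a simple fade schedule over message recency; a minimal sketch (with illustrative alpha values, not the ones tuned for the final app) might look like this:

```python
def message_alpha(age_rank: int, max_visible: int = 3,
                  newest_alpha: float = 0.9, oldest_alpha: float = 0.3) -> float:
    """Opacity for a message given its recency rank (0 = newest).
    The newest message stays nearly opaque; older ones fade linearly so the
    environment remains visible behind the chat history."""
    if age_rank >= max_visible:
        return 0.0  # hidden beyond the visible window of ~3 messages
    t = age_rank / max(max_visible - 1, 1)
    return newest_alpha + t * (oldest_alpha - newest_alpha)

# Example: alphas for the three visible messages, newest first.
print([message_alpha(r) for r in range(3)])  # [0.9, 0.6, 0.3]
```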

Visual Interface (Output).
The visual interface of GlassMessaging (Figure 4b) consists of four main UI panels, namely, notifications, contacts, chat messages, and voice/keyboard input panels.

Audio Interface (Input-Output).
As depicted in Figure 5, users can interact with GlassMessaging via voice commands (see Appendix B.1 for details) to navigate the UI (e.g., 'SCROLL UP', 'SCROLL TO TOP') and dictate text (using 'VOICE MESSAGE'). Audio feedback (e.g., beeps) accompanies some input interactions. When the app is not in dictation mode, voice commands can directly activate various functionalities, such as opening notifications ('OPEN NOTIFICATION'), selecting contacts ('<NAME>'), sending the message ('SEND'), and hiding the interface ('HIDE CHAT'). Voice shortcuts such as 'TEXT <NAME>' are also available, which combine '<NAME>' and 'VOICE MESSAGE' for direct text entry. Similarly, the 'REPLY' command opens the notification and immediately begins dictation for a reply.
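Conceptually, the command set above behaves like a dispatcher that matches each recognized utterance against a small grammar. The Python sketch below models this logic for illustration only; the actual app registers these commands through MRTK's speech input handlers in C#, and all method names here are hypothetical.

```python
import re

class GlassMessagingStub:
    """Stand-in for the app's UI layer; method names are hypothetical."""
    def open_notification(self): print("notification opened")
    def open_chat(self, name): print(f"chat with {name} opened")
    def start_dictation(self): print("dictation started")
    def send_message(self): print("message sent")
    def hide_chat(self): print("interface hidden")

def dispatch(app, contacts, utterance) -> bool:
    """Route a recognized utterance to the matching action.
    Shortcuts expand to their multi-step equivalents."""
    u = utterance.strip().upper()
    if u == "OPEN NOTIFICATION":
        app.open_notification()
    elif u == "REPLY":                       # shortcut: open + dictate
        app.open_notification(); app.start_dictation()
    elif u == "VOICE MESSAGE":
        app.start_dictation()
    elif u == "SEND":
        app.send_message()
    elif u == "HIDE CHAT":
        app.hide_chat()
    elif (m := re.match(r"TEXT (\w+)", u)) and m.group(1) in contacts:
        app.open_chat(m.group(1)); app.start_dictation()  # 'TEXT <NAME>' shortcut
    elif u in contacts:                      # bare contact name opens the chat
        app.open_chat(u)
    else:
        return False  # not a command; treat as dictation content
    return True

# Example: dispatch(GlassMessagingStub(), {"PETER", "MARY"}, "TEXT PETER")
```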

Manual-input Interface (Input).
GlassMessaging supports two manual input methods: a wearable ring-mouse and mid-air hand gestures.See Appendix B.1 for details.
Ring Mouse. The user can scroll through the contact list using the ring mouse's 'up' and 'down' buttons. The 'right' button toggles between input modalities and selects the send button. The 'center' button activates the selected virtual button and serves as a long-press toggle to hide/reveal the entire interface.

Mid-air Interaction. The visual interface can also be manipulated through mid-air gestures. The contact list can be scrolled by swiping, and a contact's chat can be opened by pressing their virtual icon. The input modality is chosen by selecting the corresponding virtual button (voice or keyboard). Pressing a notification opens the chat with the sender.
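Summarized as a mapping, the ring-mouse behavior described above looks roughly as follows (the action names are hypothetical placeholders, not identifiers from the codebase):

```python
# Hypothetical binding table mirroring the ring-mouse behavior above.
RING_BINDINGS = {
    "up":          "scroll_contact_list_up",
    "down":        "scroll_contact_list_down",
    "right":       "toggle_input_modality_or_select_send",
    "center":      "activate_selected_button",       # short press
    "center_long": "toggle_interface_visibility",    # long press hides/reveals
}
```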

STUDY 3: EMPIRICAL COMPARISON OF GLASSMESSAGING WITH MOBILE PHONE MESSAGING
This study evaluates GlassMessaging (Glass, sec 4) by contrasting it with the current dominant solution, mobile phone (Phone) messaging applications. The focus is on multitasking and messaging behaviors in mobile and hands-busy scenarios across various everyday tasks.

Participants
The study included 16 volunteers (P1-P16, 8M, 8F), aged 19-30 (mean age = 23.2, SD = 3.1), from the university community. All had normal or corrected vision and no reported color or visual impairments. To minimize bias, an equal number of native and non-native English speakers with professional working proficiency were included. Only 5 had previous experience (less than 2 hours) with OHMDs. All had used phones (10 Android, 6 iOS) for messaging for over six years, with Telegram and WhatsApp the most utilized apps. All participants typed on their phones, and four occasionally used voice-to-text. Participants received USD 14 compensation, and none had taken part in the earlier studies.

Apparatus
Participants employed GlassMessaging (final version), discussed in study 2 (sec 4.3), for mobile messaging. For the mobile phone condition, all participants used the Telegram mobile app on a Google Pixel 6 Pro (Android) phone to maintain consistency and avoid privacy concerns. We developed a bot using the Telegram Bot API that autonomously sent and received messages, with a basic UI panel located above the main app for instructions (Figure 6A). The app delivered haptic and audio feedback to notify users about new messages.
To simulate a realistic experience, a scrolling contact list of 16 contacts with common English names was used for both platforms [6].
Both platforms were linked to the tablet computer running a Python program to display instructions, record participants' inputs, and compute messaging timing and accuracy.Moreover, participants' multitasking behavior and the screens of both platforms (Figure 6) were video-recorded.
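For readers replicating the setup, a study bot of this kind needs only two Telegram Bot API endpoints, sendMessage and getUpdates (long polling). The sketch below shows the general pattern and is not the study's actual bot code; the token and handling logic are placeholders.

```python
import requests

TOKEN = "<bot-token>"  # placeholder; issued by Telegram's @BotFather
API = f"https://api.telegram.org/bot{TOKEN}"

def send_message(chat_id: int, text: str) -> None:
    """Send one text message to a chat via the Bot API."""
    requests.post(f"{API}/sendMessage",
                  json={"chat_id": chat_id, "text": text}, timeout=10)

def poll_updates(offset: int = 0):
    """Yield (chat_id, text) for incoming messages via long polling."""
    while True:
        r = requests.get(f"{API}/getUpdates",
                         params={"offset": offset, "timeout": 30}, timeout=40)
        for upd in r.json().get("result", []):
            offset = upd["update_id"] + 1  # acknowledge processed updates
            msg = upd.get("message")
            if msg and "text" in msg:
                yield msg["chat"]["id"], msg["text"]

# Example: echo every received message back to its sender.
# for chat_id, text in poll_updates():
#     send_message(chat_id, f"Received: {text}")
```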

Tasks
To compare GlassMessaging (Glass) with mobile phone messaging (Phone), we devised a multitasking simulation involving multiple daily life activities (study 1, sec 3.1.1).

Primary Task: Walk to Eat Snacks in the Kitchen. The primary task involved a series of activities like walking, preparing food/drinks, washing dishes, and eating, representing mobility and hands-, eyes-, and mouth-busy situations (average duration: 8.1 min, SD = 1.5; see Appendix C.1-Figure 12). This approach helped us understand the impact of realistic tasks on messaging behaviors across the two messaging platforms.

Secondary Task: Messaging.
Task Focus. A messaging task (Figure 6) was designed for participants to perform concurrently with the primary task. The focus was on replying to messages, the most common and comprehensive behavior, which involves contact selection and message viewing (see sec 3.2.4, Figure 10, [6]).

Message Content. In line with previous studies [66], the messaging task asked participants to re-type a preset message displayed in the instruction panel rather than drafting a reply independently. This design controls text length and minimizes the effects of confounding factors like thinking time and word choice. Participants were instructed to enter the text shown in the instruction panel and then send it as a message via typing or voice dictation.
These preset messages were drawn from a standard text entry phrase set for mobile contexts, collected by Vertanen et al. [76]. Specifically, a set of Long messages with an average length of 13.9 words (SD = 1.6; M = 71.4, SD = 3.2 characters) and Short messages with an average length of 6.6 words (SD = 1.2; M = 33.4, SD = 3.2 characters), totaling approximately 100 messages, was chosen based on previous findings [53,74].
Stimuli. Participants received a maximum of 10 instructions per session to simulate a realistic chat conversation. Each instruction contained a message the participant had to send to a contact, both selected randomly without repetition. A minimum gap of 45 seconds was maintained between instructions [4,37,55]. For example, if the instruction was "[John] Did you get this?", the participant had to send the message "Did you get this?" to "John". The current instruction was displayed continuously to facilitate easier typing and to eliminate the potential confounding factor of working memory capacity. New instructions were given one at a time and only after the previous instruction was addressed. The chat history was reset to a few initial messages before each session.
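Below is a minimal sketch of this stimulus loop, assuming the bracketed '[contact] message' instruction format above; the show and wait_for_reply callbacks are hypothetical hooks into the instruction panel and chat server.

```python
import random
import re
import time

def parse_instruction(instruction: str):
    """Split '[John] Did you get this?' into ('John', 'Did you get this?')."""
    m = re.match(r"\[(?P<contact>[^\]]+)\]\s*(?P<text>.+)", instruction)
    return m.group("contact"), m.group("text")

def run_session(messages, contacts, show, wait_for_reply,
                max_instructions=10, min_gap_s=45):
    """Issue up to `max_instructions` prompts with at least `min_gap_s`
    seconds between them, pairing random messages with random contacts
    (no repetitions); the next prompt appears only after the reply."""
    pairs = list(zip(random.sample(messages, max_instructions),
                     random.sample(contacts, max_instructions)))
    last = 0.0
    for text, contact in pairs:
        time.sleep(max(0.0, min_gap_s - (time.monotonic() - last)))
        show(f"[{contact}] {text}")  # displayed continuously until addressed
        wait_for_reply()             # block until this instruction is answered
        last = time.monotonic()
```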

Study Design
The study was conducted using a within-subject, repeated-measures design. All the independent variables and measures are listed in Figure 7 and Table 2. Specifically, two independent variables, Platform (Glass vs. Phone) and Message Length (Short vs. Long), generated four conditions. A baseline condition, no-messaging (i.e., participants only performed the primary task), was included for comparison. The Platform variable was counterbalanced, and Message Length was administered in ascending lengths. The key messaging measures were:

• Texting Speed (WPM) = average text entry rate in words per minute [3], where text entry time is calculated from the start of entering a message to sending the message (Figure 7);
• Message Response Time (seconds) = average time gap between receiving a message and replying to it (Figure 7);
• Texting Accuracy (%) = 1 − MSD(replied, received) / max(|replied|, |received|), where MSD [3,70] is the Levenshtein distance between the lowercased replied and received messages;
• Perceived Messaging Effectiveness (1-7) = "It was effective to use the selected platform for texting while engaged with the primary task";
• User Experience Questionnaire - Short (UEQ-S), measuring pragmatic and hedonic qualities on an 8-item scale where each item is transformed to -3 to +3 [24,40].
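For concreteness, the speed and accuracy measures can be computed as in the sketch below; the WPM formula follows the standard (|T| − 1)/time convention with five characters per word [3], and normalizing the edit distance by the longer string's length is our reading of the MSD-based definition above.

```python
def msd(a: str, b: str) -> int:
    """Levenshtein (minimum string) distance on lowercased strings."""
    a, b = a.lower(), b.lower()
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def texting_accuracy(replied: str, received: str) -> float:
    """1 minus the edit distance normalized by the longer message length."""
    return 1.0 - msd(replied, received) / max(len(replied), len(received), 1)

def texting_speed_wpm(text: str, entry_seconds: float) -> float:
    """Words per minute: (|T| - 1) characters over entry time, 5 chars/word."""
    return ((len(text) - 1) / entry_seconds) * (60 / 5)

# Example: a 34-character message entered in 12 seconds.
print(texting_speed_wpm("Did you get this? See you at noon.", 12.0))
print(texting_accuracy("Did you get this", "Did you get this?"))
```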

Procedure
After obtaining consent, we conducted a 30-40 minute training session to familiarize participants with the apparatus and the primary and secondary tasks. Participants were explicitly informed that they could use existing typing/correction aids such as auto-correct, auto-complete, swipe typing, and voice input on the mobile app. Nevertheless, they were not briefed about the different text lengths before or during the study sessions. Once acclimated to the apparatus, they first experienced the no-messaging condition, followed by the four messaging conditions on the two Platforms. After each condition, participants completed questionnaires detailing their multitasking and messaging experiences before taking a break. After the experiment, they filled out a subjective preference questionnaire and were interviewed for approximately 10-15 minutes. The whole experiment, including training, lasted approximately 2 hours.
Participants were instructed to carry out primary tasks as naturally as possible and to maintain consistency in food/drink preparation across conditions (e.g., snack quantity, water and heat levels of drinks). For the messaging task, participants were free to select their most comfortable modality (e.g., voice or keyboard input, ring mouse or mid-air gestures), as the study aimed to compare the natural usage of the two platforms rather than the different modalities. Furthermore, they were guided to reply at their convenience, without the necessity of immediate responses.

Quantitative Results
Table 3 displays the participants' input interaction methods for each Platform. In most scenarios, participants utilized a blend of methods, particularly for Glass. Figures 8-9 present the mean performance of participants on the quantitative measures (see Appendix C.2-C.3 for detailed data and analysis). There were no significant differences between native and non-native speakers concerning the speed or accuracy of text entry.
Participants leaning towards Glass highlighted the advantages of convenient voice input, hands-free operation, and modality switching based on situational needs. On the other hand, Phone enthusiasts pointed to familiarity, easier message editing, and the limitations of GlassMessaging (sec 5.7.3) as determining factors. Participants perceived short message entry to be more accurate using voice on Glass, but voice correction of long messages was deemed more "demanding" than typing on Phone, influencing their preference.

Qualitative Results
Overall, participants had higher texting speed and quicker response times with Glass than with Phone due to its hands-free nature, convenient access to voice input, and multimodal input support. However, Glass's lack of text editing support, occasional voice recognition errors, and platform unfamiliarity resulted in decreased accuracy and a higher cognitive load.

5.7.1 GlassMessaging Promotes Quicker Responses by Providing Increased Messaging Opportunities. The hands-free, wearable, and multimodal aspects of GlassMessaging facilitated enhanced messaging opportunities and expedited responses compared to Phone (sec 5.6: Message Count and Message Response Time).

• Hands-free operation. GlassMessaging allowed hands-free UI navigation and dictation activation, while phone users had to manipulate the chat interface with their hands after completing primary tasks. Specifically, Glass enabled messaging in situations where hands were unoccupied but not available for phone interaction, such as when they were wet, unclean, or unhygienic. With its voice commands and mid-air gestures, the wearable nature of Glass made the messaging application more easily accessible than the phone, eliminating the need to find a place to set the phone down when hands were required for primary tasks.

• Wearable, always-on nature. The expedited response time of GlassMessaging was also a product of the prompt reception of incoming messages, which enabled quicker replies while multitasking. This contrasts with Phone users, who had to conclude primary tasks before accessing their phones to check messages. P5 mentioned that "smart glasses come in, so we can easily read messages and reply" during tasks like dishwashing, where holding a phone is inconvenient. Most participants (11/16) found GlassMessaging more convenient for staying connected and promptly processing information, thus accelerating reading and response times for urgent messages.

• Multimodal input. GlassMessaging enabled improved modality coordination for multitasking between messaging and primary tasks. During activities like walking or stair climbing, users could utilize voice or the ring mouse to navigate the UI and input text, keeping their focus on both the environment and the app. Likewise, while eating, participants could view messages with their eyes and respond using mid-air gestures or ring interaction without needing to look down at a phone. Interestingly, participants reported that this approach made messaging safer and more efficient, as it negated the necessity to divert their visual and motor attention away from primary tasks. "I don't need to look down to see the message [compared to Phone], which is safer while walking and going upstairs (P1)." "[While eating] I have to shift my visual and hands attention to the Phone. With the Glasses, I don't have to do that. I can continue eating, and as long as I can speak and the Glasses can comprehend me, I can message while eating concurrently. (P9)"

5.7.2 GlassMessaging Enhances Input Efficiency During Multitasking. The hands-free voice input of Glass facilitated seamless interface navigation and efficient text entry during multitasking with GlassMessaging (sec 5.6: Texting Speed, Perceived Messaging Effectiveness).
• Navigation. Compared to phones, where users must search or scroll to locate a contact before composing a message, voice input with Glass reduced interaction steps, allowing users to select a chat and initiate dictation with a single command.

• Text entry. All participants used dictation in GlassMessaging, while only two did so on Phone (Table 3). As expected, in Glass, participants favored hands-free voice input to minimize interference with primary tasks requiring hands: "glass-based interaction is more convenient because voice input is generally faster than touch input for texting. Plus, I don't need to touch the [phone] screen while eating and texting (P10)". Most participants found dictation in Glass quicker than typing on Phone, particularly for long messages, and perceived it as less effortful. As anticipated, on Phone the majority (15/16) were used to typing as a habit, finding it easier to edit and generally more accurate, although "typing out long sentences can be a little tedious (P14)". One participant occasionally used voice dictation while eating, while another used it frequently because "voice is easier and faster [than typing] (P3)". In summary, the holdable nature of Phone and its superior accuracy facilitated typing, but typing reduced input efficiency during multitasking, especially for lengthy messages.

5.7.3 GlassMessaging Limitations. Despite the potential benefits of GlassMessaging in multitasking scenarios, further enhancements are required to address its limitations.

• Tradeoff: Efficiency vs. Accuracy. While hands-free interaction and voice input improved texting speed and decreased response time, lower texting accuracy was a concern for participants, especially for long messages: "sending a wrong message can be worse than not sending anything (P9)". Accuracy worsened in unsuitable environments (e.g., speaking while chewing food) or with accents different from native speakers. In contrast, Phone has obvious advantages in accuracy, as auto-correction and selective editing simplify error correction. "I can edit the problematic word [by typing on Phone] instead of re-speaking the whole sentence [on Glass]. But the typing act [on a mid-air virtual keyboard] is obviously a no-go for the Glasses. (P4)" Yet some participants still preferred the higher efficiency, tolerating the slight tradeoff in accuracy: "I sacrificed a little bit of accuracy to achieve efficiency. (P2)"

• Tradeoff: Efficiency vs. Cognitive load. Participants reported increased cognitive load when using Glass due to learning new technology, handling input errors, and overcoming technical obstacles. Learning GlassMessaging involved getting used to mid-air gestures, remembering voice commands, and adapting to OHMDs. Participants also had to deal with input errors such as repeating voice commands to correct text and accidental mid-air gesture triggers: "I looked down at my food, and the glass would misclassify my eating hand as some gesture (P1)". However, participants found GlassMessaging easier to use with practice.

• Technical challenges of current OHMD development. Participants found that content stabilization on the OHMD caused shaking during mobility tasks like walking or climbing stairs, making text difficult to read. This could be mitigated by combining multiple content stabilization methods based on mobility and the primary task (e.g., body-locking while walking, head-locking while stationary). Furthermore, the accuracy of gesture detection decreased in mobile scenarios, and participants occasionally had to repeat voice commands, indicating the need for personalized customization of motion and dictation parameters (e.g., silence gap, dictation duration). As expected, the weight and size of the OHMD caused discomfort and obstructed movements, and external brightness reduced the interface's readability: "the device takes up a lot of space on my head, and I need to tilt my head up more while drinking (P13)". These issues negatively impacted the RTLX and Preference measures for GlassMessaging.

DISCUSSION
We identified mobile texting issues during daily activities (Study 1) and developed an OHMD app, GlassMessaging, to address some of them (Study 2). Comparing GlassMessaging to mobile phone texting showed how it can mitigate the hands- and eyes-busy problems that occur during multitasking (Study 3) and demonstrated its viability in everyday situations.

Is OHMD Better than Phone for Messaging?
With that said, we do not claim that our solution can completely replace current platforms like phones and smartwatches for multitasking with messaging. The phone is widely accepted, provides higher accuracy, and is applicable in almost all scenarios, while the potential of GlassMessaging mainly relies on its increased access to voice input and seamless hand interaction, which may not be suitable in all scenarios. Therefore, we view OHMDs as a promising complementary solution to the phone for multitasking with messaging, enabling heads-up usage [84] that reduces the "smartphone zombie" phenomenon [2].
6.2 Why is Voice Input More Natural on OHMDs than on Mobile Phones?
Our study shows that users prefer voice input with OHMDs but use virtual keyboards on phones. This is due to inconvenient mid-air keyboards on OHMDs [1,80] and the affordances unique to each platform [54,75,86,87] (see Table 4).

6.2.1 Wearable Nature of OHMD. The wearable nature of OHMDs may explain this preference for voice input. Their physical proximity allows effortless capture of verbal commands anytime, anywhere, and the see-through near-eye display permits message viewing without impeding hands or platform access. In contrast, commonly used phone texting apps often require extra steps for voice commands, and their distant displays necessitate closer interaction, limiting their voice input support. Hence, unlike mobile devices, the proximity of OHMDs to the head as wearable devices naturally favors voice input.

6.2.2 Keyboard Use is Natural on Mobile Phones. Conversely, users favor phone keyboards, particularly when stationary. Touchscreen keyboards are intuitive, accurate, and provide reliable tactile feedback, an aspect that GlassMessaging's virtual keyboard currently lacks due to OS limitations. Furthermore, keyboard usage also offers more privacy, as voicing messages aloud is not required.

6.2.3 Input Modality Switching on OHMDs. Multitasking restrictions can lead to situational disabilities and impairments [78]. When specific interactions are limited, switching to an alternative modality supported by the platform can overcome these limitations. For example, phone typing becomes impractical during hand-engaging tasks. Shifting to hands-free interactions can help, but mobile apps often require additional hand-based steps (e.g., opening the chat, pressing a button), limiting phone voice usage. In contrast, OHMDs enable seamless input modality switching to overcome such situational disabilities (sec 5.7.1).

6.2.4 Higher Perceived Trust of OHMDs. Some participants reported higher trust in OHMDs, enhancing their comfort with voicing messages. This trust stems from the device's exclusivity, ensuring that only the user views the content. The wearable nature of OHMDs also makes them less shareable than phones, enhancing perceived trustworthiness.

Comparison with Google Glass XE
Google Glass XE (2013-2017) [29,30,77], a discontinued product, supported heads-up messaging. Here, we distinguish our application from Google Glass XE, showcasing our contributions from both practical and academic perspectives.
6.3.1 Google Glass XE (GG) Interface. GG incorporated a default set of voice action commands for messaging (Appendix D-Figure 15) [30]. Its lightweight and seamless design combined voice, head gestures, and touch gestures for input, with an OHMD for output. To activate voice commands or send messages, users would utter "OK Glass" and "Send a message to", followed by the contact's name and the message content. Users would respond to a message by saying "Reply" followed by their message content. Hence, GG provided an efficient method for sending and replying to individual messages.
6.3.2 Comparison. Table 5 compares GlassMessaging (GM) and GG, illustrating that both employ voice input for text entry and navigation. Our evaluation shows voice input to be an efficient messaging method on OHMDs, indicated by quicker text entry and reduced response times, validating GG's design choice. Speech recognition challenges do affect GM's accuracy, which GG users likely also experienced. Additionally, GG likely faced inconveniences in editing messages, requiring input method switches for text correction, such as canceling incorrect messages using touchpad swipes and re-dictating the correct message. Below, we delve into their differences in messaging task handling, display mechanism, and interaction design.
One-off Messaging vs. Context-maintained Messaging. GG's voice commands catered to users' immediate messaging needs but struggled to facilitate multiple interleaved messages or replies to older messages. Hence, while it met basic messaging needs, GG fell short in handling complex messaging scenarios [6,13,31,32]. Effective communication often necessitates retaining contextual information, such as message sequence, frequency, and priority [15,58]. Modern messaging apps have more comprehensive features (e.g., full chat history, easy contact switching, unread indicators), a standard that GM follows as well.
Line-of-sight vs. Above Line-of-sight. The message display location is another key difference between GG and GM. GG positioned content above the line of sight (LoS) to maintain situational awareness, but this demanded user effort in switching away from the natural angle of forward vision. As higher-resolution displays with larger FoVs are now available, recent studies propose positioning some content within the LoS while leaving sufficient gaps to maintain environmental awareness [63,85]. GM thus displays certain content within the LoS and uses gradual opacity adjustments to ensure situational awareness.
Interaction Support. In comparison to GG, GM offers a unique set of interaction possibilities for messaging while multitasking (sec 5.7.1-5.7.2). GG incorporated head-gesture functionality as a convenient way to initiate and respond to real-time messaging needs. Conversely, the integration of ring and mid-air interactions in GM enables users to alternate between input modalities based on situational constraints (sec 6.2.3).
In conclusion, both GG and GM facilitate heads-up messaging for daily usage, with different affordances and design emphases. GG, designed for an earlier generation of smart glasses with lower resolution and a smaller FoV, focused on convenient real-time one-off messaging needs. GM takes advantage of advanced OHMDs with higher resolution and larger FoVs and can handle immediate or delayed messaging needs involving multiple messages; the current GM design is thus more suitable for complex messaging needs. Good synergies exist between the two designs, and combining them could offer a better messaging experience. For instance, GM could incorporate GG's approach to one-off messaging by integrating head gestures for real-time needs.

Can OHMD Improve People's Social Connections?
Unlike phones, the hands-free nature and wearable near-eye display of OHMDs permit timely message viewing. In our observational study, one participant (P10) checked messages promptly when eating alone, maintaining a connection with others. Yet, due to social etiquette, they tended not to do so when dining with others. Their message viewing and responding behaviors diverged: they viewed unread messages immediately but replied at their discretion. Therefore, the wearable and hands-free capabilities of OHMDs may enhance users' social connections by facilitating swift message viewing.

Design Implications for Messaging on OHMDs
Besides the explored opportunities and constraints, we see further opportunities for GlassMessaging in the multitasking-with-messaging space, including text editing and multimedia support.

6.5.1 Supporting Text Editing. Voice input presents a trade-off between speed and accuracy. However, adding more text editing functions on top of dictation could improve accuracy. For example, by incorporating EYEditor [28], a text editing system designed for OHMDs in on-the-go scenarios, GlassMessaging could achieve an accuracy level similar to that of phone messaging apps. Alternatively, AI capabilities, such as large language models, could predict text, correct writing errors, and enhance content [82].
6.5.2 Supporting Input Modality Switching. Although GlassMessaging shows promise through easier voice input, voice is not always suitable. Moreover, some input methods might lose precision due to situational impairments [78], like misrecognized hand gestures. In these cases, users should be able to transition smoothly to another input mode. In public settings, users could switch from voice to socially acceptable alternatives such as ring interaction or finger taps (e.g., [83]).

6.5.3 Identifying Optimal Output Methods. GlassMessaging mainly uses text for messages, supplemented with auditory feedback for notifications and user actions. Participants appreciated the visual display for facilitating quicker error identification, faster reading, and more efficient chat history scanning than temporal voice messages. Furthermore, two participants stated that the near-eye display enhances content privacy compared to phones, which bystanders can view. However, availability concerns arose for high-attention and eyes-busy scenarios. An alternative could be voice output, for example, text-to-speech or retaining the original audio alongside the text version. This could also address the accuracy issue by enabling comparison of the text and audio versions when text inaccuracies impact readability. Further studies are needed to determine the most effective output and information display methods.

LIMITATIONS AND FUTURE WORK

Social Acceptability
Our study 3 was conducted in private settings, bringing into question the applicability of GlassMessaging in public and social contexts. Participants expressed worries about voice input's potential awkwardness in public, disturbance to bystanders, privacy issues, and unexpected input/triggers in noisy environments. Moreover, voice input may not be suitable in quiet zones like libraries. Additional studies are required to understand the convenience and affordances of voice input on Glass in social settings.
Multiple text entry methods, including non-voice-based text entry [34,50,80,83], can address this concern.These methods could allow for input method transitioning.Non-intrusive, concealable gestures [41] or interaction techniques [62] could be utilized for non-voice input.Alternatively, using GlassMessaging to view messages could complement phones, paralleling current smartwatch practices.

Long-Term Usage
We examined messaging with tech-savvy participants for short periods in a limited set of realistic scenarios. The novelty of GlassMessaging and the discrepancy in daily Glass and Phone usage could have influenced our findings. Moreover, our findings may not extend to other groups, like older adults, and do not consider potential long-term effects. Thus, more research involving diverse OHMD prototypes, user populations, and longitudinal field studies is warranted.

CONCLUSION
Multitasking with messaging is a common real-life occurrence, yet current mobile applications and platforms offer inadequate support. In this study, we initially analyzed people's messaging behaviors through a survey and an observational study. We identified two primary situational impairments caused by existing mobile platforms (hands-busy and eyes-busy), leading us to iteratively develop GlassMessaging (https://github.com/NUS-HCILab/GlassMessaging), a messaging application designed for OHMDs to mitigate these limitations. A comparison with a phone-based messaging application in a realistic multitasking situation verified that GlassMessaging allows more messaging opportunities, faster responses, and increased texting speed due to its hands-free, wearable nature and coordinated multimodal input. Consequently, we discussed the affordances of both devices and potential design implications. We anticipate that messaging on OHMDs will represent the next communication frontier and serve as a beneficial complement to mobile phones during multitasking with messaging, propelled by technological advancements.

Fig. 2 .
Fig. 2. Users' self-reported messaging frequencies during various daily activities. The data indicate that the two most common scenarios for daily messaging in conjunction with multitasking are On Transportation and Eating Alone. The Y-axis represents various activities, while the X-axis signifies a 7-point Likert score, with 1 representing "Do not message at all" and 7 "Do most messaging". The error bars represent the 95% confidence interval.

Fig. 3 .
Fig. 3. Interfaces of the Telegram messaging app on four OHMD devices (Vuzix Blade, Epson BT-300, Nreal Light, HoloLens 2) used in Iteration 1, each presented in both Light and Dark Display Modes. Note: For optimal visibility and detail, please view this content on a device with a color display.
(a) Initial interface design of GlassMessaging.(b) Final interface of GlassMessaging.

Fig. 4 .
Fig. 4. (a) The initial interface includes a notification bar on top, a chat message box in the middle, a contact list on the right with numbers indicating unread messages, and a texting area at the bottom. (b) The final interface features a Notification Panel, Contact List Panel, Chat Messages Panel, and Input Panel (Voice/Keyboard, Send). 1) Notification panel: temporarily displays new message notifications upon arrival. 2) Contact list: displays all contacts available for communication; selecting a contact (through voice or manual input) opens the respective chat conversation in the Chat Message panel. 3) Chat Message panel: shows the chat history between the user and the selected contact. 4) Input panel: contains a text entry field displaying the entered message, an input selection panel for toggling between voice dictation and the virtual keyboard, and a 'send' button to transmit the entered text as a message. Note: For optimal visibility and detail, please view this content on a device with a color display.

Fig. 5 .
Fig. 5. Steps for sending a message after receiving a notification. The user wears the OHMD and a ring mouse and sees the environment. The user (1) says 'SHOW CHAT'; the interface is displayed, and a notification including the name of the sender and the sending time appears at the top of the view with a beep sound; (2) says the name of the contact (e.g., 'PETER'), and the system automatically navigates to the chat interface of the respective contact; (3) says 'VOICE MESSAGE' and dictates the message via voice; the system transcribes the user's utterances to text in real time, displayed in the text entry box, and once the user stops speaking for a measured amount of time (silence gap), dictation turns off automatically; (4) says 'SEND', and the system sends the message; (5) says 'HIDE CHAT', and the full interface is hidden, restoring the full view of the environment.

Fig. 6 .
Fig. 6. Apparatus including the Instruction Panel. (A) Telegram app with the Instruction Panel; (B) a participant uses Phone to perform messaging while eating (top) and cleaning (bottom); (C) GlassMessaging interface; (D) a participant uses GlassMessaging to perform the messaging task while eating (top) and cleaning (bottom). Note: For optimal visibility and detail, please view this content on a device with a color display.

Fig. 8 .
Fig. 8. Measures related to time and speed of primary and secondary tasks with 16 participants. Refer to Table 2 for measurement calculation details. The X-axis represents the Message Length, and the error bars represent the standard error. See Appendix C.3 for more details.

Fig. 9 .
Fig. 9. Measures related to accuracy and perception of the messaging task with 16 participants. Refer to Table 2 for measurement calculation details. The X-axis represents the Message Length, and the error bars represent the standard error. See Appendix C.3 for more details.

Fig. 12 .
Fig. 12. Details of the primary task. The primary task required participants to walk back and forth for a total distance of around 43 m × 2. This included walking from the starting position to a circular stair (≈ 25 m), climbing up a one-floor circular stair (≈ 4 m height), walking (≈ 14 m) to a kitchen (≈ 6 × 3 m²), performing washing and eating activities, and finally walking back to the starting position.

Table 7. Average performance ('mean (sd)') in study 3 with 16 participants. The first column represents the Platform-Message Length combination using the first letters of each (G = Glass, P = Phone; S = Short, L = Long).

Fig. 13 .
Fig. 13. NASA-TLX scores for the four conditions (Glass-Short, Glass-Long, Phone-Short, and Phone-Long) in study 3 (N=16). Error bars represent standard errors. Note that the Long sessions were administered after the Short sessions on both platforms.

Table 3 .
Input interaction methods for messaging employed by participants during Study 3 (N=16). The count of participants is indicated in parentheses.

Table 4 .
Different affordances supported by OHMDs and Phones [44,54,75,86,87]. Note: This list is not exhaustive.

Table 5 .
Comparison between GlassMessaging and Google Glass XE. LoS stands for Line of Sight, and FoV represents Field of View. Note: This list is not exhaustive and is based on public online resources [29,57,77], as Google Glass XE has been discontinued since 2017.