Everyday Life Challenges and Augmented Realities: Exploring Use Cases For, and User Perspectives on, an Augmented Everyday Life

Contextually aware visual, auditory, and haptic interfaces can augment and empower people in their everyday life. However, little is known about the use cases for, and users' perspectives on, a pervasive augmented reality that augments places and humans. In this paper, we contribute a first step towards an augmented societal future by outlining promising example use cases for assistive mixed reality interfaces (i.e., AssistiveMR). By surveying 60 participants, we found that an augmented reality has the potential to find widespread application in a plethora of scenarios, including supporting people in recalling information, disconnecting from reality, and augmenting communication with others using visual augmentations. However, participants expressed concerns regarding the potentially high costs associated with errors and raised questions about the social acceptability of augmentations of humans and real-world surroundings. Our exploration of promising use cases for assistive MR augmentations aims to serve as inspiration, motivating researchers to augment, support, and empower individuals in their daily activities.


INTRODUCTION
Humanity is on the brink of experiencing physical, cognitive, and perceptual augmentations through digital technologies, transcending spatial and temporal boundaries. Past advances in human augmentation have laid the path for an augmented reality that enhances human capabilities beyond the individual and will create societal impact. Back in 1962, Douglas Engelbart envisioned technologies that amplify the human body and mind [7]. Since then, the Augmented Humans (AH) and the Virtual/Augmented/Mixed Reality communities have contributed a plethora of artefacts to the overarching vision of an augmented reality that supports and empowers individuals. For example, works in the broader augmented human field have shown how technology can augment an individual's body and reality, including the integration of supernumerary robotic limbs [1], e.g., a sixth finger [38], spatial directional guidance using cheek haptic stimulation in a virtual environment [28], and the augmentation of a human's social identity using AR [2]. However, there exists a fundamental gap in understanding the problem space that individuals navigate in their daily lives, as well as the potential intersections where augmented realities and human intervention can collaborate to help individuals overcome their daily challenges. In this paper, we introduce 60 participants to the idea of an everyday augmented reality using the example of assistive mixed reality glasses [2,19] in the realm of a pervasive augmented reality [10], to which we refer as AssistiveMR in the remainder of this paper. We first surveyed existing everyday life challenges to then explore how (and where) an augmented reality can support and empower individuals in their everyday life. Our survey shows that individuals face challenges around the comprehension of real-world information, struggle to concentrate due to real-world noise, face difficulties in recalling information, lack accurate time management and the ability to anticipate
near-future experiences, and face barriers when communicating with others. Based on these challenges, we outline opportunities for an augmented reality and tie them back to existing works in the broader augmented human domain. For example, participants voiced that augmentations can support their social communication through modality changes (e.g., from visual information → tactile information), assist them in recalling information, and help them disconnect from reality during focus time.
Contribution Statement: By utilizing a survey to gather insights into the everyday life challenges currently faced by individuals, and complementing this with a synthesis of augmented realities applied to these challenges, we enable researchers and practitioners to conceptualise, design, and develop augmentations that support and empower individuals in their day-to-day tasks, and we contribute towards an augmented reality that generates positive societal impact.

RELATED WORK
Our work draws inspiration from prior advancements in human augmentations. It is driven by the rapidly approaching era of a pervasive augmented reality [10]. We first review works on human augmentations and then discuss works around the concept of a pervasive augmented reality.

Augmenting Humans
The concept of augmenting humans aims to furnish individuals with capabilities that frequently surpass those inherent to humans, enhancing human potential by providing additional affordances beyond natural capabilities. In 2022, Inami et al. [13] argued that "being in a JIZAI state, where one owns a JIZAI Body, is a preferred state to achieve when realizing human augmentation". In their work, a "JIZAI Body" allows individuals to live the way they wish to live in society by incorporating five key aspects, Supersensory, Supersomatory, Possessory/Transformatory, Duplicatory, and Fusionary, all of which augment humans on different abstraction levels. For example, the Supersomatory aspect of a JIZAI body extends an individual's abilities and functions beyond the constraints of a natural human body using human-centred robotics (e.g., a sixth finger [38] or MetaLimbs [35], which maps the motion of a user's feet to two robotic arms). Many more works in the broader augmented humans field have explored physical, cognitive, and perceptual augmentations of humans through digital technologies. For example, Knierim et al. [16] explored how augmented reality can be used to alter the speed at which individuals perceive their real-world surroundings. Using a proof-of-concept implementation, they showed how AR can overcome the temporal limitations of human visual perception in real-time and as a natural extension of the human senses. Others, such as Watanabe et al. [40], contributed a prototype that acquires auditory real-world information that is not heard by an individual's normal auditory perception and eliminates unwanted auditory information according to the individual's preferences.
With the advancements of technologies that are capable of amplifying and augmenting human perception in everyday life scenarios, the AH and the Human-Computer Interaction (HCI) communities have begun to discuss the societal implications of an era in which computational and human systems are closely intertwined, i.e., Human-Computer Integration (HInt) [8,27]. As underscored by Mueller et al. [27], realizing the full potential of a future enriched by human augmentations and reality augmentations necessitates a thorough understanding of the societal implications that result from integrating these technologies into our daily lives. As mentioned earlier, the present state of augmented human research foresees the emergence and integration of human augmentations, whether Supersensory or Supersomatory [13], into our daily experiences. However, a crucial next phase involves a) assessing the current challenges in everyday life, and b) investigating the optimal applications of augmentations, encompassing robotics and immersive technologies like virtual/augmented/mixed realities, to foster an assistive and augmented society as a whole.

Pervasive Augmented Reality
Future everyday interfaces will be pervasive and omnipresent, enabling individuals to continuously augment, alter, or diminish their everyday experiences [11,25,36]. Following in the footsteps of smartphones, augmented reality technologies, be they visual, auditory, or tactile, will supply users with information at any time and in any place. For example, Davari et al. [5] investigated context-aware AR interfaces that minimise intrusiveness whilst providing fast and easy visual access to information during social contexts. Langlotz et al. [17] proposed a mobile application that augments the user's real-world surroundings with relevant information anytime and anywhere. Lu and Bowman [18] investigated in-situ AR interfaces to support cooking through augmented recipe and timer widgets. Others envisioned a future augmented reality where AR glasses are used for authentication in public [20,41] or for TV-watching experiences [21,39]. Google Research introduced opportunistic interfaces (i.e., an extension of the concept of opportunistic controls, where virtual content is semantically matched with physical objects [12]) that grant individuals complete freedom to summon augmented interfaces on everyday objects via voice commands or tapping gestures [6]. The idea of a pervasive augmented reality has also motivated researchers to explore use cases to support blind or low-vision people's capabilities in social situations [26,43] or to help deaf and hard-of-hearing people better engage in social communications [24]. While there is evidenced potential for a pervasive augmented reality to contribute positive societal impact, O'Hagan et al.
[31] emphasized the importance of not only acknowledging the positive aspects but also recognizing the challenges introduced by such augmentations. For example, in an era where humans and real-world objects can be substituted through virtual artefacts in real-time [15], which types of substitutions involving humans and real-world objects should be permissible [31]? Moreover, if such substitutions become the social norm, under what circumstances and how should modifications and augmentations of reality become an integral part of our everyday experiences?
By merging the realms of augmenting humans, which typically emphasizes enhancing individual capabilities, with pervasive augmented realities, we argue that it is imperative to chart a roadmap that features illustrative use cases applied to prevalent everyday challenges, while also undertaking a critical examination of the potential negative consequences of human augmentations [2,31]. As part of this journey, our objective is to provide an overview of commonplace challenges and subsequently describe how a specific type of human augmentation, such as the use of eyewear, can contribute to an augmented and supportive everyday life.

METHODOLOGY
To explore promising use cases for augmented humans and realities, we examined visual, auditory, and haptic augmentations. To achieve this, we introduced participants to assistive mixed reality artefacts, i.e., AssistiveMR, as follows:
• mixed reality: a medium consisting of immersive computer-generated environments in which elements of a physical and a virtual environment are combined.
• artefact: an object made and shaped by people, usually for a specific purpose or use.
• AssistiveMR: an experience that augments, alters, or diminishes a person's perception of reality and aims at assisting them in their day-to-day tasks.
To illustrate the idea of an augmented human, we used fictional personas and one example use case for each modality, i.e., visual, auditory, and tactile augmentations (three in total). While we do not claim that visual, auditory, and tactile augmentations cover all types of human and reality augmentations (e.g., smell and taste augmentations [3,29]), exploring these fundamental modalities, which are feasible to implement in everyday life as of now, provides a strong initial foundation for further research to build upon. In the following, we describe one example, namely Augmented Social Communication; see Figure 1 for a visualisation of the example. The full set of examples used in the survey is available in the supplementary material for reproducibility.

Example Scenario: Augmented Social Communication
Anna (31, female) is a fictional interaction designer. She is deaf and uses British Sign Language (BSL) to communicate with her colleagues at work. Anna's goal is to continue growing her skill set; she wants to be promoted to a Senior Interaction Designer role. She is eager to learn new design techniques every day from her co-worker, Tom. Tom is happy to exchange ideas and design techniques. However, both face communication difficulties, making the exchange and the learning-from-each-other process time-consuming. An augmented British Sign Language miniature, see Figure 1, supports Anna in her communication with Tom through a hologram of a visual assistant that transforms speech into BSL in real-time. This example scenario, along with the examples for auditory and tactile augmentations in our supplementary material (see Figure 3 in Appendix A), was used in the survey to contribute to a common understanding of what we mean by augmenting humans and realities using assistive mixed reality.

Survey Design and Task
Our survey first asked for informed consent and the participant's demographics, such as age and gender. Participants reported their everyday life challenges and were then introduced to AssistiveMR, along with the examples for visual, auditory, and tactile augmentations. We added a step-by-step description of the figures as alternative text for each scenario. After each example scenario, participants had to report their level of comprehension. We added these comprehension checks to ensure that participants understood the concept of AssistiveMR and had a rough understanding of augmenting humans and realities in everyday life scenarios. They then reported a maximum of three scenarios where they see potential for using variations of AssistiveMR in their everyday life. To support their brainwriting activity, we provided the following template for each scenario:
• Challenge: What is the challenge in this scenario?
• Location: Where is this scenario taking place?
• Existing Assistance: How do you currently overcome the challenge, including technology and/or human assistance?
• Opportunity for AssistiveMR: What should the assistive mixed reality artefact be able to do? How does the artefact help you to overcome the challenge(s) and contribute to your goal(s)?
To conclude the survey, we asked for insights into potential disadvantages and unsuitable scenarios for AssistiveMR in everyday life. This allowed us to cover both promising use cases as well as the potential dark side of human augmentations.

Recruitment and Participants
We recruited 60 participants for this survey. For the recruitment, we made use of Prolific, a popular platform for participant recruitment in the context of scientific works (e.g., [4,37]). We recruited three gender-balanced samples and used the following pre-screening methods on Prolific: a) no pre-screening: random sampling (20 participants), b) vision pre-screening: a sample who indicated not having normal or corrected-to-normal vision (20 participants), and c) hearing pre-screening: a sample who reported having hearing difficulties (20 participants). Using Prolific allowed us to reach a diverse set of participants from various countries (i.e., from 15 countries, including South Africa, the United Kingdom, Portugal, Poland, and many more), thereby increasing the diversity of our sample when surveying existing everyday life challenges and collecting opinions and thoughts on augmenting humans and reality. However, some of the results might stem from a student's context, as 19 out of 60 participants (less than a third) indicated being enrolled as students according to Prolific demographic data.

[Figure 1: Anna's mixed reality glasses convert Tom's speech into British Sign Language (BSL) using an augmented miniature person. Step 1: Tom talks; a microphone embedded in Anna's glasses records the conversation (digital copy of audio). Step 2: Anna's glasses transform the speech into a BSL interpreter. Step 3: The BSL interpreter is spatially embedded into the real-world environment.]
Our comprehension checks after each scenario asked participants if they understood the concept of AssistiveMR. After checking the data, we had to exclude the data from two participants (3.33%), as they clearly indicated not having understood the examples or put no effort into filling out the survey. For the reported sample, the median comprehension rating for each example was 5 on a 5-point Likert scale.
Our final sample was on average 30.36 years old (SD = 9.52; min = 20; max = 61), with a near-balanced gender distribution (27 male, 29 female, and two non-binary). Participants were compensated for their participation with £6/hour. Filling in the survey took between 30 and 45 minutes. The study went through an ethics checklist and is part of a larger research effort that received ethical approval at the University of St. Gallen.

Data Analysis
We applied affinity diagramming [34] on a digital whiteboard (i.e., Miro) to cluster participants' responses and create overarching themes. This process was done for the challenge scenarios, the locations, and the opportunities for AssistiveMR. We then mapped participants' responses regarding their current use of assistance to the challenge scenarios and clustered them. This process resulted in five key challenges that we report in detail in the paper: (1) Comprehension of Real-World Information, (2) Staying Focused in Daily Life, (3) Recalling Information, (4) Time Management and Preview, and (5) Difficulties in Communication. For these challenges, the participants reported variations of AssistiveMR that can support them in overcoming the challenges through different forms of augmentations, including the alteration, augmentation, and extrapolation of real-world information. Figure 2 provides an overview of the existing challenges and promising use cases for AssistiveMR. In the following section, we provide an overview of the everyday life challenges of our surveyed sample. We then report on the opportunities, and participants' concerns, when augmenting humans and their everyday life.

RESULTS
We present the challenges faced by individuals in their everyday life, how they currently overcome them, and where they see potential for the augmentation of humans and their reality in their day-to-day experiences.

Challenge 1: Comprehension of Real-World Information
A frequently mentioned challenge was the comprehension of information in everyday life, due to external environmental conditions (e.g., night time) or poor vision (e.g., visual impairments).
There were comments on the impact of darkness on participants' abilities to comprehend information visually. For example, some participants mentioned not being able to "see clearly [at] night due to poor vision" (P12) and that they "want to be able to drive safely at night without any [vision] difficulties" (P31). Others mentioned that they have "blurred vision" (P7), experience difficulties in reading small text (i.e., "understanding what [music] note I am seeing, due to the proximity of the notes and their size" (P23)), and are not able to properly read road signs at a greater distance. Both P20 and P34 brought up their difficulty in comprehending other types of real-world information, such as assembling objects based on text-based manuals or learning to play the guitar based on non-visual instructions.
4.1.1 Existing Assistance. Participants mentioned several approaches they apply to help them comprehend information. First, existing technologies such as glasses, mobile devices, and screen readers were brought up as tools that help them transform real-world information into readable information (e.g., translations on mobile devices or by using screen readers). Some expressed that they leverage their mobile device to enhance lighting conditions, employing functionalities such as turning on the flashlight on the back of the device. Additionally, they employ the device's camera to capture written text and other real-world information and subsequently magnify the content by zooming in for improved visibility and accessibility. While the use of existing devices and services was mentioned by the participants (e.g., "[...] try and take pictures with the max zoom" (P23) or "using YouTube tuition to learn [to play the guitar]" (P34)), there were responses that indicated the need for human assistance when facing difficulties in comprehending real-world information (e.g., "getting people to help you" (P50) and "most of the time I ask for someone's notes" (P23)).
4.1.2 OverlayMR: Visual and Auditory Overlays in Reality. Participants mentioned various use cases for AssistiveMR to overlay reality and support them in their comprehension of real-world information. For example, they noted that visual overlays can help them "highlight the lanes of a complex intersection and warn about pedestrians and cyclists" (P49) or, in the context of an assembly task, "visualize how to assemble furniture" (P20). Related to the driving scenario, one participant envisioned wearing "glasses which will superimpose a daytime image of exactly where [they are] driving onto the lens, thus making it appear as if [they are] driving during the daytime, but still be able to see other objects such as other vehicles/people on the road" (P34), see OverlayMR in Figure 2. P34 added to combine both visual (glasses) and auditory augmentations (earpiece) to then "see the guitar fretboard [and] lights, or indicators, will appear to show where to place which finger, on which string. The sound of the chord will play in my ear, and then I can compare how it sounds with the chord that I play" (P34). Others envisioned visual overlays on top of real-world objects in a warehouse that "depict [the object's] stock, amount ordered and amount arriving" (P33). Some individuals suggested the use of visual overlays that are visible to bystanders as a helpful aid. For instance, it was noted that these overlays could assist elderly individuals by informing bystanders to help carry their groceries home as they walk alongside. There were also comments on an altered and extrapolated reality that enables users to "wear glasses with eye tracking capabilities that enhance/zoom to signs of their choice" (P22) and to use "an earpiece that will immediately filter the outside noise to adjust to [their] preferred level. If the noise is too low it will amplify it, and if the noise is too loud it will reduce it" (P8). One participant raised that AssistiveMR
could assist them in extrapolating a recipe from a specific food: "if I have a cookie it will give me the exact recipe of [it]. It will help to improve cooking abilities and it will save a lot of time to replicate a specific recipe" (P4). Another participant, P28, mentioned that such extrapolations of realities can assist them in comprehending scenarios and retrieving workflows and procedures pertaining to them.
Visual and auditory overlays that augment an individual's reality (i.e., OverlayMR) can assist in humans' comprehension of real-world information. For example, they can support humans when driving in the dark or overlay the ingredients next to a dish to support information retrieval.

Challenge 2: Staying Focused in Daily Life
Another core challenge voiced by the participants is staying focused in everyday life without getting distracted by the surroundings. For example, participants voiced that they "struggle with concentration" (P15) and with "not being able to concentrate on one specific thing" (P2), especially in noisy places or amid distractions, as voiced by P50, e.g., people passing by their office desk.

4.2.1 Existing Assistance.
Participants mentioned various approaches that help them stay focused, including taking medication (i.e., "I am taking medication to help with this" (P15)), relocating to a corner desk where there is less noise from other people (P50), and using apps on their mobile devices that help limit screen time to minimise overall distractions (P55).

4.2.2 FocusMR: (Partial-)Isolation from Reality. As the participant feedback suggests, people can envision various methods to enhance their focus in everyday life. For example, using glasses that "record what the lecturer says and then shows it as written words" (P32), as this assists in "better focus on the topic and don't be distracted easily" (P32). Relatedly, P56 asserted that experiences in an augmented reality "should be able to store and replay any of the study material on [their] screen [...] will help [them] to be more efficient and less distracted" (P56). Moreover, there was a suggestion that a variant of AssistiveMR should possess the capability to "[detect] how much time [someone] spend[s] procrastinating and try to limit all potential distractions" (P55). Others, such as P17, mentioned that mobile devices are often a source of distraction, and that AssistiveMR can help by visually filtering out content that one would see on such devices.
Isolating humans from their real-world surroundings, for example by diminishing real-world sources of noise, can help them to focus better. For example, FocusMR in Figure 2 can help individuals focus in crowded spaces by dismissing their real-world surroundings using visual/auditory filtering.

Challenge 3: Recalling Information
In addition to the challenges around comprehending real-world information and staying focused, participants voiced that recalling information is a key challenge in their everyday life and impacts their daily routine. Several participants reported having difficulties forming "clear memories" (P6) and reported that they "always [forget] what [they] wanted to do" (P42) in their day-to-day tasks.

4.3.1 Existing Assistance.
Participants reported writing down notes that they revisit often, for example, using a personal journal where they write down important information (P52). Others, such as P5, voiced using reminders, e.g., notifications on mobile devices, that assist them in recalling information.

4.3.2 MemoryMR: In-Situ Recall Support. AssistiveMR was perceived as potentially useful in supporting memory tasks and recalling information in day-to-day tasks. For example, P6 mentioned that glasses can store information they read and save it for later recall. In line with Section 4.2.2, P56 mentioned that a variation of AssistiveMR "should be able to store and replay any of the study material on [their] screen in easy to understand and clear manner" (P56). While the previous two examples are based on visual information that is recorded and then stored for recall, there were also comments about collecting auditory data to help in recall tasks. For example, participants envisioned an augmentation that "will help in collecting data from [them] while [they] speak it out loud. it will help with when [they] forget [they] can always go back to it and replay the memory" (P42). Others voiced that augmentations of their reality should be able to help them "remember important information for work and life in general, for example, remind [them] of important work meetings, friends, and children's birthdays and to meet work deadlines" (P52).
MemoryMR can assist in documenting and preserving real-life experiences, facilitating the retrieval of important information when needed at a later time. For example, MemoryMR in Figure 2 could record an individual's speech and remind them about associated important information in real-time.

Challenge 4: Time Management and Preview
There were comments about the difficulties of time management and previewing future everyday life tasks. For example, one participant mentioned not being able to visit multiple physical locations in their everyday life (i.e., "there are so many exhibitions and galleries I would love to visit, but I am short on time" (P18)). In a similar vein, participants noted that they do not plan ahead, and that this significantly impacts their everyday life: "Sometimes I fail to allocate time to complete or progress certain tasks" (P16) and "I didn't prepare ahead and need to find a hotel while driving" (P35). Several participants mentioned challenges in imagining and anticipating their forthcoming reality in the near term. For example, participants noted facing challenges in thinking about their day ahead and preparing for it. Specifically, P25 brought up being challenged by not knowing "how a certain thing looks on [them] without buying the item first" (P25). In line with the imagination of near-future experiences, P20 noted they are often not able "to actually try on e.g., clothes, shoes or jewelry, before purchase" (P20).

4.4.1 Existing Assistance. P18 voiced having access to digitalised exhibitions and artworks, which they can visit any time, to overcome their challenges related to time management. Furthermore, participants, such as P57, voiced making use of their mobile device to help plan events and activities ahead. For the challenge of not being able to preview everyday life experiences, for example, trying on products before purchasing them, P25 mentioned using Adobe Photoshop, allowing them to photo-edit a picture of themselves that shows how they would look in the new clothes. Others, such as P20, noted using size guides or looking at photos online to get a rough sense of the clothes before buying them.

4.4.2 SimulateMR: Simulate (Future) Experiences. Within the scope of augmenting humans and their realities, as touched on in Section 4.4.1, participants mentioned using simulated environments to resemble real-world experiences, enabling them to enter emulated realities whenever they want. For example, P18 expressed that "wearing augmented reality glasses [would allow them to] enter the museums and galleries whenever [they] want. [...] they are brought to life in [their] home via the augmented reality glasses, [they] can move around the exhibit, experience its scale and the detailing up close" (P18). In a similar vein, participants mentioned that variations of AssistiveMR can help them try on jewelry or clothes without actually trying them on, e.g., virtually trying on clothes, shoes, and jewelry to visualize the product before purchasing it (P20).
Augmented realities can support individuals in accessing digitalised versions of realities at any time and at any location, assisting them in previewing future everyday life experiences.For example, SimulateMR in Figure 2 could enable individuals to visit another reality, e.g., a museum, in the context they are currently in, e.g., from their living room.

Challenge 5: Difficulties in Communication
We received a plethora of insights into participants' difficulties when communicating with others. First, participants reported facing significant challenges in understanding languages beyond their native language, e.g., to "communicate with a person that does not speak my language and neither speaks English" (P4). Second, there were challenges around articulating thoughts so that they are understandable by others, which often led to misinterpretations in participants' past day-to-day experiences (P6, P44). P32 stated that communication is often heavily impacted by environmental conditions, such as real-world noise. Others, such as P18, noted that it can be challenging for them to convey ideas or concepts just by using words.
4.5.1 Existing Assistance. Participants, such as P33 and P54, noted utilising their mobile device for communication challenges, e.g., Google Translate [9] to assist in multi-language conversations. For the challenges around the proper articulation of thoughts, visuals were considered helpful to bring ideas to life, e.g., by searching online for suitable images or merely sketching out ideas on a mood board (P18). In the case of communication challenges, participants indicated conveying their point to a third person who is then able to raise the point to everyone else involved. Several participants, such as P32 and P35, brought up that asking people to repeat what they have said often helps them in their communication. However, P22 added to "get embarrassed when doing it too many times so [they] have resorted to lip [reading]" (P22).

4.5.2 CommunicateMR: Supporting Social Communication. By introducing AssistiveMR, participants envisioned various augmentations that can assist them in their day-to-day communications with others. For example, the majority of the participants envisioned an augmentation that allows them to be in control of the presentation modality, e.g., transferring speech → text or changing the language in real-time. One participant mentioned that such augmentations "should be able to effectively listen to what is being spoken and translate effectively to the desired language quickly" (P15). P54 referred to earpieces that would allow them to "translate a language [they] do not understand into [their] native language" (P54). Another participant, P15, highlighted that human augmentations should have the capability to offer visual representations to individuals with hearing impairments, aiding in conveying communicated information effectively, similar to what we showcased in Figure 3 in our supplementary material. While many comments were on social communication, one participant voiced that, in the context of communicating with their partner, whose native language
is different to the participant's, it would be "cool if [AssistiveMR] could overlay real objects with the name in the foreign language to help [them] learn the language [of their partner]" (P39).
CommunicateMR can support and enhance social communication, including modality switches (e.g., from auditory → visual) and language changes (e.g., real-time translations across languages). For example, CommunicateMR in Figure 2 can transform speech into semantically-related visualisations.

The Dark Side of Augmenting Humans
In addition to inquiring about the existing challenges encountered in daily life and exploring how different forms of AssistiveMR could enhance individuals' daily activities through augmented, altered, and diminished realities, our objective was to delve into the unique challenges introduced by augmenting humans and realities. In the following, we discuss six core challenges voiced by the participants when augmenting humans and realities in the realm of a pervasive augmented reality.

4.6.1 Social Acceptability of Human Augmentations in Everyday Life.
Participants voiced concerns about the societal impact of variations of AssistiveMR and how these might reshape society. For example, participants noted that the idea of using augmentations to augment and support humans in their everyday life can "cause dependency in such that instead of someone learning a language, they could rely on the artefact to do the job" (P16), and that "relying on a combination of assistive MR artefacts and AI could have profound and negative impact on employment" (P14). Others indicated that human augmentations can be perceived negatively, with one participant bringing up the example of using artefacts as a replacement to simulate human exchange, which they think is the wrong approach: "though it can be argued that it fulfills a need, it also masks the underlying problem of cultural tendency for detachment" (P35).
4.6.2 Clear Benefit Over Existing Technologies. In the domain of human augmentations, participants underscored an additional facet: the need for novel experiences that evidently outperform existing solutions. A key question raised was to what extent novel AssistiveMR experiences surpass the utility offered by current technologies. For example, participants mentioned that in the case of a visually-impaired person who wants to read the ingredients of an item while cooking, "it could be a case of over designing a solution where easier ones exist (e.g., braille or optical character recognition devices)" (P56). Furthermore, one participant mentioned having previously experienced "something like a Heads-Up display for overlay of useful information while driving, [...] which gives you directions to get somewhere (on foot) via a superposition of path and arrows on the image of the streets as seen in the phone camera. That, however, is worse than just having a map you check now and then, because it requires you to walk around permanently looking at the phone instead of the surroundings. Any kind of [augmentation] of this sort needs to be unobtrusive" (P35). Another participant stated that when facing difficulties reading something on a screen, a screen reader might just do the job, and "advanced" technology might be too expensive (P17).

4.6.3 Costs of Errors.
Even if augmenting humans and realities using advanced mixed reality technology provides clear advantages over existing technologies, participants were still concerned about the implications of potential errors. For example, "if humans always rely on these methods, there is the risk of them not being able to adapt in a difficult environment without them" (P4). Furthermore, participants noted that inaccuracies could lead to wrong translations and descriptions of, for example, real-world objects. These inaccuracies could then result in misinterpretations and might have emotional and physical consequences. For example, one participant explained this with the wrong classification of temperatures, where "any information transfer error in the definition of hot or cold may injure a person" (P51).

4.6.4 Human Augmentations Cannot Fully Replace Humans.
There was a consensus that variations of AssistiveMR might not be able to fully replace humans. For example, participants commented that when comparing technologies such as AssistiveMR with human assistance, human assistance "can often be more flexible and adaptable to different contexts and needs" (P7). Furthermore, participants mentioned that "in a setting where emotional support or human contact is needed, such as a care facility, [they] don't believe [human augmentations] would bring better comfort than a 'real life' interaction" (P34), implying that emotional support and human understanding, coupled with artistic expression, might be challenging to fully replace using technology alone.
4.6.5 High Costs of Augmented Humans and Realities. As hinted in Section 4.6.2, participants expressed concerns that the adoption of AssistiveMR could potentially lead to elevated costs for users. For example, participants noted that they feel "it would be expensive [...] whereas a relative or friend might help you without expectation of payment" (P43). Additionally, they mentioned that the integration of human augmentations in everyday life depends on many additional factors, including financial stability, and that it might be too expensive for "those who may need it the most" (P20).
4.6.6 Privacy and Safety Concerns. Participants also noted concerns regarding their privacy and safety in the context of augmenting humans and their daily experiences. For example, P33 voiced that "driving navigation using assistive mixed reality is less suitable as it could provide distractions and clutter the vision and affect the concentration". Moreover, the identification of unpredictable augmentation inaccuracies, as highlighted in Section 4.6.3 through the illustration of errors in defining terms such as hot or cold, was considered crucial for safety. Regarding privacy, there were comments that "concerns [of] unauthorized access to sensitive information [...] could outweigh the benefits, making conventional methods or human assistance more preferable in maintaining confidentiality" (P19). Furthermore, participants questioned "how much of the user's data will be collected in order for them to be able to use these technologies; and are [potential future users willing] to provide that information" (P20).

DISCUSSION
The outcomes of our survey underscore a multitude of opportunities within the realm of assistive mixed reality interfaces that aim at assisting individuals with their everyday life challenges. These possibilities range from enhancing individuals' everyday life experiences, such as aiding them while driving in low-light conditions, to seamlessly presenting ingredient information alongside a dish for efficient information retrieval. Overall, our work sheds light on the potential of augmenting human experiences and their real-world surroundings to address challenges associated with understanding real-world information, maintaining focus in daily life, retrieving information, managing time, previewing near-future realities, and facilitating communication with others, as presented in Section 4. However, alongside these promising use cases, there are noteworthy negative aspects that may impede the widespread adoption of human augmentation technologies. In the following, we delve into a discussion of our findings, offering insights into both the promising prospects and potential obstacles associated with the augmentation of individuals and their real-world surroundings.

A Clear Benefit of AssistiveMR over Existing Technologies and Human Assistance is a Prerequisite ...
Our survey highlighted that there is a clear need for advanced assistive technologies that support individuals in their everyday life using various human augmentations. However, from the participants' responses, we also noticed that for variations of AssistiveMR to find widespread adoption in everyday life, there must be a clear benefit over existing technologies and assistance. For example, apps such as Google Translate [9] for mobile devices can already support, and enable, communication with others (e.g., by translating speech into the communication partner's native language). The central question here is: how can advanced visual interfaces using AssistiveMR better support users in their communication? Many of the discussed alterations and augmentations of reality, see Figure 2, require real-world sensing capabilities, which introduce privacy concerns. For example, what elements of reality are permissible to alter/augment in everyday life scenarios (i.e., perceptual agency [30])? Furthermore, embedding bystander consent [32] into the concept and design of human augmentations becomes crucial when augmenting individuals and realities, especially when experiences take place in semi-public or public environments.
Going beyond a technology-centric view of an augmented reality, our survey strongly suggests that augmentations relying solely on advanced technologies are unlikely to completely replace human involvement (see Section 4.6.4). It is essential to explore how these innovations not only enhance but potentially enable interactions in reality that might otherwise be impossible, which requires additional comprehensive explorations regarding the integration of augmented humans and realities (and potential societal harms [2,30,31]) into the fabric of daily life.

... But Human Augmentations Can Contribute to an Enhanced Everyday Life
Despite some concerns regarding the advent of an augmented society, there is a wide application area for human augmentations. Taking CommunicateMR as an example, there is already commercial work that has explored the use of augmentations to facilitate human-to-human communication. For example, Google showcased in 2022 a set of glasses capable of translating and transcribing speech in real-time [23]. Whilst the idea of glasses that alter and augment social communication has not yet found widespread application, there is early work that shows the potential of such augmentations. For example, OrCam MyEye [33] can instantly read text from a book, smartphone screen, or any other surface to visually-impaired people. Through these use cases, it becomes apparent that human augmentations have the potential to enrich individuals' lives. However, it is noteworthy that systems of this nature are not currently fully integrated into society, and one could argue that we are still a considerable time away from achieving a fully augmented society. While we can only anticipate what a future of everyday human augmentations will look like, heading towards a computer-mediated reality, which can alter, augment, and diminish reality, seems promising. In fact, MR technology can be used to augment an individual's real-world environment to allow them to work and collaborate from any location using virtual displays and input devices [14,22], without the need for physical equipment. Others, such as Wolf et al. [42], showed how an augmented reality that uses in-situ instructions and a guidance mechanism to assist humans during a cooking task offers assistance to, for example, older individuals with declining cognitive function and increases their independence.

Next Steps for Human Augmentations in a Pervasive Augmented Reality
There are a few promising next steps that are worth discussing before concluding our work. First, we introduced participants to augmentations of humans using three concrete examples that cover visual, auditory, and tactile experiences. While these examples might have influenced participants' first thoughts about human augmentations, and their reporting of their everyday life challenges, presenting example scenarios was a necessary step to ensure participants understood the broad capabilities of an augmented future reality. However, we acknowledge that the design space of human augmentations extends beyond the scope of our investigation, as highlighted in Section 2.1 and Section 3. These augmentations encompass the integration of a sixth finger [38], the concept of MetaLimbs [35], and the application of smell and taste interfaces [3]. We encourage future research to map out the design space of human augmentations and how various modalities, including visual, auditory, and tactile augmentations, contribute to an augmented and supportive everyday life at a societal scale.
Finally, although we captured the everyday life challenges of various individuals, including challenges they encounter at home, in the office, at school, or in a public space, we acknowledge that there exists a plethora of additional everyday life challenges individuals face in their lives. This means that while our findings provide important insights into everyday life challenges and use cases of potential augmentations using variations of AssistiveMR, we do not claim that our exploration is exhaustive. Instead, we encourage future work to build upon our investigation to explore additional everyday life challenges, to design and implement human augmentations that can overcome them, and to collectively contribute towards a pervasive augmented reality.

CONCLUSION
In this paper, we introduced AssistiveMR as a way to augment humans and their realities. By introducing participants to the idea of an everyday pervasive augmented reality, we were able to outline important fundamental work that contributes insights into existing everyday life challenges and how variations of AssistiveMR can be applied in a future augmented life that goes beyond individual human augmentations. We presented challenges in such an augmented reality that should be considered when contributing novel human augmentations, such as the potentially high costs of inaccuracies and privacy concerns due to the sensing required for human augmentations in everyday life scenarios. With our work, we believe we make an interesting contribution to the augmented humans community by empirically collecting, for the first time, existing everyday life challenges, and by surveying potential assistive augmentations that can support people in overcoming their challenges. In conclusion, we hope that our work will serve as a foundation for future advancements in human augmentations, specifically guiding their integration into individuals' real-world experiences and helping to overcome existing everyday life challenges.

[Figure body. Example 2: Augmented Social Communication. Anna's mixed reality glasses convert Tom's speech into British Sign Language (BSL) using an augmented miniature interpreter: a microphone embedded in Anna's glasses records the conversation (Step 1), the audio is transformed into BSL (Step 2), and a BSL interpreter is spatially embedded into the real-world environment (Step 3). In the other direction, a camera embedded in Tom's glasses records Anna's BSL (Step 1), his glasses translate the BSL into audio (Step 2), and he hears it immediately through the speakers embedded in his glasses (Step 3).]
Figure 1: An example use case for AssistiveMR, showing how a visual augmentation can help a user to translate speech to their preferred modality in real-time.

Figure 2: Examples of human augmentations that allow users to contextually alter, augment, and diminish elements in their real-world surroundings. OverlayMR: Alters the perception of reality by overlaying, e.g., a dark scene through the identical, but day-time, scene along with important contextual factors, such as other cars on the street. FocusMR: Diminishes real-world elements to reduce distractions. MemoryMR: Enhances memorability through an augmentation that enables users to seamlessly access past and future information. SimulateMR: Augments the real world through artefacts that allow the user to preview a future reality experience, e.g., a digitalised art museum. CommunicateMR: Enhances a bi-directional communication between two people through virtual augmentations that represent the spoken text in images.

Figure 3: Example use cases for AssistiveMR used in our survey to introduce participants to the idea of using human augmentations to support them in their everyday life.