research-article
Open Access

Multimodal Presentation of Interactive Audio-Tactile Graphics Supporting the Perception of Visual Information by Blind People

Published: 07 June 2023


Abstract

Due to limitations in the perception of graphical information by blind people and the need to substitute the sense of sight with other senses, the correct use of multimedia in the presentation of graphics is particularly important. The authors' aim was to correctly present visual information in tactile and audio form and to provide contextually selected information to reduce existing cognitive barriers. In this article, the authors research the method by which a blind person explores a tactile picture and is provided with contextual and semantic information about the touched image elements, using a tool developed for the multimodal presentation of interactive audio-tactile graphics supporting the perception of blind people. The use of multimedia should improve the perception of the conveyed content; therefore, the effectiveness of interpreting the information contained in the tactile image is verified by analyzing tactile, audio, and contextual perception during the experiments. The following issues were considered in detail from the point of view of the blind and visually impaired: the recognition of shapes of image elements depending on their size and properties, the optimal width of elements displayed on the tablet screen, the time intervals between taps on an image element, and the acceptable length of a graphic element's audio description. The results of this study suggest the need to adapt the image presentation parameters in the various perception channels to the user's needs. Our findings on the tactile perception of geometric figure shapes indicate potential problems in recognizing figure shapes when tactile pictures are prepared incorrectly. We noticed that when the sides of a figure differ only slightly in length, or the angles between its sides differ only slightly in measure, most blind people fail to recognize its shape.
Considerable progress in improving such perception can be achieved by increasing the differences between the side lengths and between the angle measures of the figure. A further step is introducing an alternative audio description of the properties of the figure to improve the interpretability of its shape. For most users, the width of a virtual line in a digital image displayed on a tablet should be around 5 mm for a corresponding 1 mm tactile line. The time intervals defining the user's gestures (double taps, triple taps) should be configured at about 340 ms. Applying audio descriptions to tactile picture elements improves the understanding and interpretation of the information presented. Most participants in the test group accepted 5 to 10 seconds of voice prompts; longer messages were incomprehensible, and their details were hard to remember. In the proposed solution, the audio description can be divided into two or three different descriptions, available to the user after tapping two or three times on an image element. This lets the user decide how much contextual information about the touched tactile picture element is needed. The research identified typical values and ranges for the various image presentation parameters and indicated effective methods of selecting them for a specific system user.


1 INTRODUCTION

The rapid growth of internet services, mobile phone usage, and mobile applications has resulted in an increasing amount of information being presented in graphic form. However, not everyone can take in graphic content equally well. People with sight problems have limited access to this form of communication and very often do not grasp the overall meaning of a graphic. Undoubtedly, graphic information dominates in science education, in subjects such as mathematics, physics, chemistry, and engineering.

Although a graphical user interface (GUI) is a practical and effective additional source of information for sighted people, it may be confusing and, at some point, restrictive for blind or sight-impaired users, as the presentation of data is visually centric [7]. Virtually every person with any kind of visual impairment or blindness has problems accessing a GUI, and the WHO (World Health Organization) estimates that such sight problems affect 2.2 billion people globally [5, 61]. Hence, this group of society requires alternative tools to experience the benefits of digital data.

The education of blind people uses screen readers and speech synthesizers to read educational texts aloud [33, 37]. Nevertheless, when we want to present a piece of information in graphic form, this technique becomes ineffective. Only very basic pictures can be described in text form and read aloud by a speech synthesizer. Non-visual access to graphics is often provided by a verbal description through screen readers, which is not always a satisfying solution, as it offers a large amount of text that can overload working memory. An alternative approach to this problem is tactile graphics, where a user, through touch perception, can obtain some information about the picture [2, 9]. Tactile pictures include raised lines and elevated surfaces detectable under the fingers. Such pictures require proper preparation so that a blind user can recognize them and interpret their content. Several factors influence the perception and recognition of tactile pictures, such as the following:

The size, shape, and spacing of the tactile elements relative to each other, and their texture. There are guidelines and recommendations for designing such tactile pictures based on many studies involving blind children and adults with different levels of tactile finger perception [18, 58]. The level of perception depends on the size of the hands and fingers as well as the sensitivity of the fingertips; e.g., elderly people have reduced nerve sensitivity at the fingertips [6]. Human hand mobility also influences the perception of information presented in tactile graphics: uncontrolled movements (trembling hands or fingers) have a negative impact on the perception of such information.

The alternative graphic presentation has many limitations. A blind person must create a mental image of the picture being touched by repeatedly moving their hands over its surface and recognizing the elements placed on it. The tactile resolution of human fingers is low compared with the resolution of an image seen as a whole, which makes it difficult to recognize the elements presented in the image. Moreover, the amount of information transferred in this form must be limited and adjusted to the perception and cognition level of a blind person [23].

In many cases, information presented in the form of complex tactile drawings is hard to recognize. In such situations, the support of an assistant is necessary to verbally describe to the blind person the touched elements and their properties. An alternative to the assistant can be an additional semantic description of the image elements, provided as text (in braille) or as an audio description.

As blind people lose this fundamental sense, they become more reliant on their other senses to acquire stimuli and information from the environment. Compensating for the loss of a sensory function means enhancing the remaining senses to carry the missing information through an intact sense [53, 55]. Unfortunately, this approach is not free of drawbacks: as Kristjánsson et al. [27] point out, it can lead to sensory overload. Hence, we need a method that extracts only the portion of information that is crucial for a blind person while avoiding redundancy. Consequently, conveying the information should be task-oriented, and when designing exercises, we must avoid including too many aspects of an issue at once. Researchers should therefore apply multisensory integration in their work [53], such as combining the senses of touch and hearing.

Applying the sense of hearing to improve the understanding of information presented in tactile form is the subject of many studies [18, 57]. Such research has analyzed the method of sharing contextual information, as well as its format and length. Contextual information should be read aloud when a user touches a particular element of the picture. Various techniques for detecting touched elements in the image are described later in this work (see Section 2). Moreover, an important problem in the perception of tactile pictures is the precise identification of touched elements, so that the contextual information is unambiguous and corresponds to the given element of the image.

The remainder of this article is organized as follows. Section 2 gives a brief overview of the literature, popular solutions, and various approaches. Section 3 describes our developed platform and the experimental setup (research methods), and presents our solution for assessing the perception of graphic images by the blind. The results of the experiment are presented in Section 4. Finally, the article discusses the results and draws conclusions (Sections 5 and 6), including a set of refined design guidelines and principles for the presentation of interactive audio-tactile graphics for the visually impaired using multimedia solutions based on standard tablets running iOS or Android.


2 RELATED WORKS

One of the most common forms of making graphics available to the blind is preparing them as tactile graphics with the use of braille printers, 3D printers, or special swell paper. Tactile graphics are usually used to present maps, diagrams, charts, geometric figures, and other images. Such tactile pictures are prepared in commonly used graphic editors in raster or vector graphics format. Rules for preparing such images are outlined in the Guidelines and Standards for Tactile Graphics [62]. They take into account the limitations in the tactile perception of a blind person's hand, and their application ensures a better understanding of graphic information presented in this way.

As we mentioned in the Introduction, tactile pictures alone may, in many cases, be insufficient for properly understanding the information presented on them. Therefore, it is important to provide extra contextual information about the currently touched elements and their properties. Various types of research detect the touched areas of a tactile picture using different technologies and tools.

2.1 Available and Popular Solutions for Tactile Graphics Presentation

One form of detecting touched elements in a tactile image is the use of video cameras and image recognition software to detect the position of the fingers and specific elements of the image and the environment. In [3, 20, 32], the authors marked the elements of a tactile picture with QR codes and then detected the fingers placed closest to a given code. On this basis, audio descriptions of the touched image element were read aloud. The authors of [10, 16, 43] used more advanced image processing and recognition methods involving artificial intelligence. As the authors emphasize, the problem is to recognize and distinguish the fingers of the user's hand, which is why additional markers are often placed on the user's fingers. Obtaining precise results frequently depends heavily on the lighting conditions and the position of the hand on the tactile picture [3].

Another popular solution focuses on dedicated tablets, e.g., the Talking Tactile Tablet by Touch Graphics and the IVEO by ViewPlus, that connect to a computer. They come with a dedicated application for preparing pictures with audio descriptions and a browser for interactive audio-tactile presentation of the prepared graphics. This group of solutions has gained considerable popularity, and numerous research studies use them in educational processes [31, 44]. However, despite the usability of these devices, such as the ability to present tactile pictures conveniently thanks to their large size, their high price and low mobility may discourage common usage.

Dynamic haptic displays are an innovative, still largely prototypical class of devices for presenting graphic content, in which a pin matrix renders part of a picture from the computer screen using dynamically raised pins (taxels) [29]. This method is applied in devices like DotView from KGS Corporation, the Graphic Window by Handytech, or the Graphiti Interactive Tactile Graphic Display by ORBIT Research. These devices convert graphical information from a computer screen into touchable images in real time, which is their great value, though they can present only a small part of the picture at once due to their small surface. There is also research in which an element is identified on the dynamically presented tactile picture (the touched area is recognized immediately), but as the authors in [9, 60] point out, this method is chiefly used for map presentation and for navigating a blind user inside a building. The high price of such devices (5,000–40,000 euros) is another limitation to common usage. Work is also underway on new hardware solutions enabling multimodal audio and haptic feedback to enhance the touch user interface. The authors of [15] propose audio-tactile skinny buttons for touch user interfaces, where audio and haptic feedback is obtained by simultaneously applying various electric signals to pairs of ribbon-shaped top and bottom electrodes. Another line of research concerns identifying the shapes of 3D objects using 3D models whose size and division into parts are adapted to the tactile perception of blind people [12, 56].

Many recent studies also use Android or iOS tablets with a large screen diagonal (usually around 12–13 inches). The main idea of this approach is to place a tactile picture (a sheet of paper) of the appropriate size on the tablet screen, matched to the screen size. The tactile elements touched by the user are detected by the tablet screen; a sheet of paper is not an obstacle for capacitive touch screens. The large screen allows pictures of A4 paper size containing plenty of elements, a size well adapted to the tactile perception of the human hand. Tactile images can be printed with a braille embosser, on specialized swell paper, or with 3D printing. The advantage of the last technique is the ability to differentiate the height of printed elements significantly; however, it is time-consuming and requires printers with a large working area (A4 size). The authors of [19, 21] describe using 3D prints to present maps and building plans as audio-tactile graphics. They tested multiple types of graphics and obtained evidence that visual augmentation may offer advantages for the exploration of tactile graphics.

The authors of the MIDAS project [17] conducted research with a developed set of tools for recording touch gestures on tablet screens and presenting them visually when recognizing elements of a tactile picture. This tool helps assess how tactile picture elements are explored by registering the behavior of the system user. The authors of all the above-mentioned works emphasize that the most essential aspects are the precise detection of the touched elements and the provision of contextual information about them in the form of audio description. However, an excess of contextual information, or erroneous messages (e.g., about a neighboring element of the picture), irritates and distracts the blind person, which makes it difficult to properly interpret the tactile picture.

2.2 Designing for the Visually Impaired

2.2.1 Voice User Interfaces.

People with visual impairment find voice interaction the most convenient form of human-machine relation [22, 38]. Correct audio description perception is a key element for blind people to properly understand content presented in graphic form. In the book Designing Voice User Interfaces, Pearl [49], based on interviews with various experts (e.g., Chris Maury), presented guidelines for designing voice user interfaces (VUIs) accessible to the blind. One recommendation is that VUIs ought to prioritize personalization over personality; for example, users should be able to choose which text-to-speech voice they prefer in their app and adjust voice preferences to their own personality. Another vital point is to keep information short and include only the most meaningful content; hence, in an audio interface, we must avoid the skipping around and skimming that are inherent features of visual interfaces. Design for VUIs has become more relevant in recent years due to the enormous advances in speech technologies and their growing presence in our everyday lives. Although modern VUIs still present interaction issues, reports indicate they are being adopted by people with different disabilities and are having a positive impact [42, 47, 51].

2.2.2 Haptic Interfaces.

Haptic perception is fast and lets the user respond naturally. A blind user, unlike a sighted one, must rely fully on this perception to benefit the most from it. However, its lower resolution compared with audio is a disadvantage, so it is only suitable for limited information feedback. If more detail is needed, haptic solutions should be reinforced with another sense, such as audio, to boost resolution [41]. Haptic feedback interfaces are also often used in navigation systems for the visually impaired and blind [13, 24]; the results of this work show a significant improvement in the assimilation of information about the surrounding environment.

Previous studies have shown that haptics specialists consider communities of visually impaired people to be among those able to profit from haptic systems that reinforce their cognitive map of visual data [30]. Moreover, the advantages of using this technology are set within the human-computer interaction (HCI) context [7, 11]. Haptic solutions have been eagerly applied to enhance students' science learning, on the premise that students acquire scientific ideas best through a hands-on approach, receiving sensory feedback in the form of haptic stimuli [46]. Haptics is inseparably connected to the sense of touch (e.g., force or tactile sensation) interacting with computer applications. Many researchers claim that combining features of haptic perception, such as force and vibrotactile feedback, with hearing can improve blind students' understanding of scientific concepts and extend the amount of information that can be acquired, as the haptic senses serve as the main medium through which visually impaired people map the world [39, 59].

2.2.3 Context-Aware Design and Multimodal Interfaces.

Providing the user with context aims to guide them toward what they can do. In a GUI this seems unambiguous, but voice interfaces offer no visible affordances to discover. For this reason, VUIs should keep informing the user what they can do or how they should respond. Another problem pointed out by Bradley and Dunlop is that the use of contextual information by visually impaired people differs both individually and collectively [8]. Hence, the application must allow an element of user control needed to present contextual information suited to the exercises.

The same authors, in their experimental study [1], showed that when wayfinding instructions were prepared by visually impaired people, the workload score was lower (according to the NASA Task Load Index), fewer minor deviations occurred, and visually impaired participants reached the goal faster. However, the same instructions used by sighted participants resulted in a higher workload score. This suggests that different approaches, accounting for cognitive workload in both groups, are needed to ensure good user involvement.

Visually impaired users cannot depend only on GUIs, even though GUIs have become the standard in human-computer interaction. This group of users needs alternative modes of interaction to convey graphical information precisely, and it seems obvious that many methods must be applied to replace sight (the most rapid and precise medium of communication) [28]. Multimodal interaction is an approach in which a user interacts with the system by combining different senses or modalities, such as gestures, touch, and speech [54]. Various sources find this approach the best way to design graphical content [8, 27]. Dufresne et al. [14] conducted a test in which they replaced the visual modality of a website with one or two other modalities. Multimodality served blind and sighted participants better and was ranked as the best interface, ahead of the haptic-only interface, which was second best.

As part of this work, the authors decided to research the method of exploring a tactile picture by a blind person and providing contextual and semantic information about the touched image elements using the developed tool for multimodal presentation of interactive audio-tactile graphics supporting the perception of blind people. The use of multimedia should improve the perception of the conveyed content. Therefore, the effectiveness of interpreting the information contained in the tactile image will be verified by analyzing the tactile, audio, and contextual perceptions during the experiments.

The contribution of this work to the field is the development of guidelines for the effective use of multimedia to support the understanding of graphic elements presented in the form of tactile graphics for the blind. The following issues were considered in detail from the point of view of the blind and visually impaired: the recognition of shapes of image elements depending on their size and properties, the optimal width of elements displayed on the tablet screen, the time intervals between taps on an image element, the acceptable length of a graphic element's audio description, and a usability test of virtual buttons for providing additional description of the displayed image.


3 MATERIALS AND METHODS

3.1 Developed Platform

The main concept of the study is to use tablets with the authors' mobile application, developed for the Android system, which allows audio description of elements of a tactile picture for the blind while the user simultaneously solves tests related to the subject of the printed and displayed picture. The general idea is to display the appropriate graphics on the device's screen and then place the paper sheet with the printed graphics exactly over the displayed image; the graphics can represent any object or mathematical graph. When the user taps the tactile elements on the printout, the application detects the place of the tap, checks whether the touched place is a graphic object, and then provides its audio description or an appropriate voice message. Figure 1 shows the concept of the solution using a tablet with a capacitive touch screen and a sheet of tactile print placed on it. The concept of the solution and its description can be found in the authors' previous work [34, 36].

Fig. 1.

Fig. 1. The concept of the authors' solution using a tablet and a tactile image.

3.2 Decomposition of the Picture Content into Atomic Elements

When creating the architecture of the solution, we decided to use vector graphic SVG (Scalable Vector Graphics) files based on the XML (Extensible Markup Language) format. An SVG file represents a graphic as a list of elements saved as paths, with vectors describing a specific object. Each element has its own specific attributes: style, size, thickness, and color, some of which are used to interact with the picture. The attributes that deserve special attention are id and onclick. Their combination is the keystone when teaching students how to work with the image and helps read the relevant messages.
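As an illustration, a single interactive element in such an SVG file might look as follows (the identifier, the script call in onclick, and the path data are hypothetical, not taken from the authors' files):

```xml
<!-- Hypothetical SVG path for one tactile figure: id identifies the element,
     onclick triggers the JavaScript handler that reads its audio description. -->
<path id="triangle-2"
      onclick="readDescription('triangle-2')"
      d="M 100 400 L 300 400 L 200 150 Z"
      style="stroke: black; stroke-width: 30; fill: none" />
```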

3.3 Applying Multimedia Support

The combination of SVG images, JavaScript technology (Figure 2), and image descriptions in JSON notation allowed for audio descriptions of the prepared graphics. The preparation procedure is as follows:

The prepared exercises and descriptions are downloaded from the server and saved on the user's device.

After tapping with a finger on the appropriate element on the printed tactile paper, the application recognizes the element by its id attribute and runs a script, written in JavaScript, defined in the onclick attribute.

The running script passes information about the element, and as a result, the application knows which description to send to the speech synthesizer.

The speech synthesizer reads the corresponding audio description.
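The tap-handling flow above can be sketched in JavaScript; the description table, function names, and JSON structure below are illustrative assumptions rather than the authors' actual schema:

```javascript
// Sketch (assumed names): map a tapped element's id and tap type to the
// audio description that should be sent to the speech synthesizer.
// The JSON structure is illustrative, not the authors' actual format.
const descriptions = {
  "figure-3": { single: "Figure 3.", double: "Figure 3 is a right triangle." }
};

function getDescription(elementId, tapType) {
  const entry = descriptions[elementId];
  if (!entry) return null;          // the tap landed outside any known element
  return entry[tapType] || null;    // no description defined for this tap type
}

// In a browser, the result would be handed to the speech synthesizer, e.g.:
// speechSynthesis.speak(new SpeechSynthesisUtterance(getDescription(id, "single")));
```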

Fig. 2.

Fig. 2. Information flow diagram during user interaction with the tactile picture in our system.

The next step is to prepare the SVG picture and its audio description in the web application developed by the authors [33]. The web application allows us to upload SVG images, assign identifiers to objects, prepare tests, and customize descriptions. This part also includes downloading pictures in PDF format and printing them on braille paper. While the application is running, the user places a sheet of paper with the tactile print on the screen of the device.

3.4 The Presentation of the Picture to a Blind Person

In our solution, we assumed that the appropriate image is displayed on the tablet screen and that a sheet with the tactile image, with the same content as displayed on the screen, is placed on top of it. By analyzing the user's tap location and comparing it with the displayed image, a collision is detected, which activates the synthesizer function responsible for reading the description correlated with the given element.
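As a simplified illustration of this collision test, consider a tap checked against a single horizontal line segment of a given stroke width (the function and geometry below are our own simplified assumptions; in practice, the SVG renderer performs the hit testing itself):

```javascript
// Sketch: does a tap at (tx, ty) hit a horizontal segment from (x1, yLine)
// to (x2, yLine) drawn with the given stroke width? All names are assumptions.
function hitsSegment(tx, ty, x1, x2, yLine, strokeWidthPx) {
  const withinX = tx >= Math.min(x1, x2) && tx <= Math.max(x1, x2);
  const withinY = Math.abs(ty - yLine) <= strokeWidthPx / 2; // inside the stroke band
  return withinX && withinY;
}
```

A wider stroke widens the tolerance band, which is exactly why the on-screen stroke width studied below matters for recognizing imprecise finger taps.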

Figure 3 shows the actual use of the system by a blind person during the research. The sheet with the tactile print is fixed by an additional frame glued to the tablet case. Our tests and those of other authors have revealed that blind people have no problem placing the printout on the tablet screen on their own [40].

Fig. 3.

Fig. 3. The real use of the system during tests by a blind pupil.

3.5 Module for the Acquisition and Analysis of User Interaction with the Tactile Picture

The developed platform also implements functionality for registering, saving, and archiving the taps and gestures performed by the user while using the mobile application. This is crucial in the context of the research carried out, as the collected data allow appropriate conclusions to be drawn. Additionally, this method of data collection allows later visualization in a specially prepared web tool (Figure 4).

Fig. 4.

Fig. 4. Web-based module for visualizing the interaction of a blind student with a tactile picture.

In the application, it is possible to display the elements touched by the test participant (red points) on the tablet screen within a selected time interval. The collected parameters include the following:

studentId: student ID in the system;

fieldId: identifier of the tapped element (optional);

testId: identifier of the tested test (optional);

x: the X coordinate of the tap;

y: the Y coordinate of the tap;

elementId: image ID in the system;

timestamp: timestamp of the tap recording;

type: the type of tap—single, double, or triple tap.
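With these parameters, one recorded event might look like the following (the concrete values are made up for illustration, as is the minimal interval filter mirroring the web tool's time-interval display):

```javascript
// Hypothetical shape of recorded interaction events (field names from the text,
// values invented for illustration).
const taps = [
  { studentId: 7, fieldId: "triangle-2", testId: null, x: 512, y: 300,
    elementId: "img-14", timestamp: 1686120000000, type: "double" },
  { studentId: 7, fieldId: null, testId: null, x: 40, y: 55,
    elementId: "img-14", timestamp: 1686120004000, type: "single" }
];

// The web tool displays taps from a selected time interval; a minimal filter:
function tapsInInterval(events, from, to) {
  return events.filter(e => e.timestamp >= from && e.timestamp <= to);
}
```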

3.6 Research Group

Students and teachers of the Educational Center for Blind Children in Laski, Poland, were invited to participate in the research. The main test group consisted of 28 students between the ages of 12 and 16, all native Poles. Moreover, during the research, the applied solutions were actively consulted with the Center's teachers to obtain a pedagogical opinion aimed at improving the proposed methods of interactive presentation of information in the form of audio-tactile graphics.

Students in the group did not have any other disabilities. The group consisted of 11 girls and 17 boys, aged 12–16, with a median age of 14. The following inclusion criteria were adopted in the study:

the degree of visual impairment or blindness was significant (more than 90%);

the students were from primary or secondary school;

there were no co-occurring emotional disorders.

3.7 Proposed Experiments

An important element of the use of multimedia is the synchronization of information streams and the mutual dependence of complementary content. Combining individual streams into one piece of information should produce a synergy effect. For blind people, this issue is even more important, as one of the principal senses (sight) has to be replaced with other senses and with stimuli coming from touch and hearing. Therefore, as part of the proposed research, we divided the experiments in terms of touch, tactile, audio, and context perception analysis with the use of various multimedia content.

3.7.1 Analysis of Tactile Perception of Figure Shapes.

The recognition of shapes of image elements depending on their size and properties.

During the study, we analyzed the ability of the blind to recognize shapes depending on their size, the differences in the lengths of their sides, and the angles of the figures. We prepared a set of tactile pictures containing 10 quadrilateral figures (with side lengths from 2 to 4 cm), whose individual sides differ in length by 0.5, 1, and 2 cm (30 figures in total). A similar set of pictures was prepared for five types of triangles (Figure 5 shows the actual test stand used during the experiments).

Fig. 5.

Fig. 5. A tactile picture containing a sample tactile image with triangles used during recognition of shapes: (a) influence of side length differences on the proper triangle shape recognition; (b) influence of differences in angle measures on the proper triangle shape recognition.

Each tactile figure in the picture has its own audio description (in our solution): after one tap, the figure number is read, and after two taps, its type. During the research, each participant had to answer a question about the type of figure recognized at that moment; however, the audio descriptions of the figure type were inactive during the test. Additionally, the authors set a time limit of 5 minutes for recognizing the 10 figure shapes in a single tactile picture.

3.7.2 Analysis of Touch Perception.

The optimal width of elements displayed on the tablet screen.

For the gesture of tapping on an element of a tactile picture to be possible and precise, it is necessary to synchronize the tactile picture with the virtual image displayed on the tablet screen. In the tests, the tactile printout was immobilized with a frame prepared by 3D printing (the black frame around the screen in Figure 5). When an object displayed on the tablet screen is too narrow, the gesture of tapping the corresponding tactile element will not be correctly identified, as shown in Figure 6(a). Therefore, it is important to properly select the width of the elements displayed on the device screen so that the user's gestures are correctly recognized (Figure 6(b)). In some circumstances, when the width of the elements displayed on the screen is too large relative to the distance between two elements in the tactile picture, the user can receive the description of the wrong image element, from successive layers of the virtual image (Figure 6(c)).


Fig. 6. Effect of the width of elements on the tablet screen on the effectiveness of recognizing touched elements: (a) missed element on the screen, (b) hit element on the screen, and (c) incorrectly recognized element.

The designed study aimed at determining the optimal value of the stroke-width attribute, that is, the value for which a blind person needed the fewest attempts to tap on an element to obtain its audio description. The study used six different values: 10, 20, 30, 40, 50, and 60 px. Given the resolution of the tablet used in the tests (Full HD, 1920 × 1080) and its screen size, these pixel values correspond to 1.5, 3.0, 4.5, 6.0, 7.5, and 9.0 mm, respectively. For another device (tablet), the dimension given in the text in millimeters should be converted to a number of pixels, taking into account the pixel geometry, screen size, and resolution of the device.
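The millimeter-to-pixel conversion described above depends only on the device's pixel density. A minimal sketch, assuming square pixels (the 13-inch diagonal below is a hypothetical example, not the tablet used in the study):

```python
from math import hypot

def mm_to_px(mm, width_px, height_px, diagonal_inches):
    """Convert a physical length in millimeters to device pixels,
    assuming square pixels (uniform pixel pitch)."""
    px_per_inch = hypot(width_px, height_px) / diagonal_inches
    return mm * px_per_inch / 25.4  # 25.4 mm per inch
```

For a hypothetical Full HD tablet with a 13-inch diagonal, this reproduces roughly the 1.5 mm-per-10 px scale reported in the text.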

Each participant of the research was given a picture with six geometric figures (square, rectangle, triangle, circle, trapezoid, and rhombus). The contours of the figures in the tactile picture were 1 mm wide, while the corresponding contours displayed on the tablet screen were 1.5–9 mm wide. Tapping a finger on an element of the tactile picture is imprecise (in most cases the finger's contact point with the tablet screen lies in a slightly different place, near the element displayed in the picture) and may not activate the alternative voice description of the element. A sighted person using a stylus (pencil) can achieve considerably better precision. Each participant was asked to tap 10 times anywhere on the contour of each figure, and the number of audio messages read was counted.

The time intervals between taps on an image element.

The prepared system assumes that context information about the touched element can be provided by detecting a single, double, or triple tap on the element. The study aimed to determine the minimum time between subsequent taps on an element so that they could be interpreted as a double or triple tap. Each research participant received a picture containing five different contours and was asked to double-tap and then triple-tap the sides and vertices of the figures (each time a different figure). For each figure, the two-tap and three-tap gestures had to be made 10 times each (50 repetitions per gesture type in total).

Two experiments were planned. In the first, based on a literature analysis [40], we set the time interval between taps to 300 ms and examined the statistics of correct and incorrect gesture classification. In the second, we analyzed the inter-tap times of a specific user and selected the interval value based on descriptive statistics. The three-tap gesture used an equal interval between taps (between the first and second tap and between the second and third tap).

The assumption in both experiments was that a subsequent tap arriving within the set interval is classified as part of a double or triple tap; exceeding the interval causes the system to start recognizing the next tap gesture from the beginning.
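The classification rule above can be sketched as follows; `classify_taps` is our illustrative name, and the grouping logic (a tap arriving below the interval extends the current gesture, otherwise a new gesture starts) mirrors the stated assumption:

```python
def classify_taps(timestamps_ms, max_interval_ms=300):
    """Group a sorted sequence of tap timestamps into gestures.

    Consecutive taps closer than max_interval_ms belong to the same
    gesture; a gesture of 1, 2, or 3 taps maps to single/double/triple.
    """
    gestures = []
    count = 0
    prev = None
    for t in timestamps_ms:
        if prev is None or t - prev < max_interval_ms:
            count += 1
        else:
            gestures.append(count)  # close the previous gesture
            count = 1
        prev = t
    if count:
        gestures.append(count)
    return gestures
```

For example, taps at 0 and 250 ms form a double tap, while taps at 1000, 1200, and 1400 ms form a triple tap; a 400 ms gap splits two taps into two separate single taps.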

3.7.3 Audio Perception Analysis.

As the guidelines and limitations presented in Section 2 indicate, the sense of touch is in many cases not sufficient to correctly interpret and understand information given in tactile form. Therefore, audio complements the sense of touch so that a blind person can correctly recognize the information. However, this requires adapting such a description, its length, and its level of detail, taking into account the limitations of cognitive load [35].

The acceptable length of graphic element audio description.

The research proposed a set of tests aimed at assessing the intelligibility of audio descriptions with a similar complexity of concepts but with different lengths and levels of detail. Each participant listened to 10 audio descriptions of different image elements with lengths of up to 5, 5–10, 10–15, and 15–20 seconds (at the default reading speed of the TextToSpeech synthesizer in the Android system). When using a different synthesizer or playback speed, the indicated reading times will change. Bearing in mind the cognitive load [26, 50], the number of concepts introduced in individual messages was limited to a maximum of eight (according to the literature [45, 48], this is the threshold above which the capacity of short-term memory is exceeded, which may lead to misunderstanding of the information presented in this way).

In the next step, after listening to the 10 messages of a given length, the participants were questioned by teachers about the intelligibility of the messages they had heard, using the think-aloud protocol. The teachers then used the answers to determine the students' level of understanding of the information provided. Examples of messages of different duration used during the research are presented below:

Audio message 1 (max. 5 seconds): “The image presents the graph of the quadratic function -x power 2 -6x-5.”

Audio message 2 (5–10 seconds): “The image presents the graph of the quadratic function -x power 2 -6x-5; the vertex of the parabola is the point with coordinates (-3,4).”

Audio message 3 (10–15 seconds): “The image presents the graph of the quadratic function -x power 2 -6x-5; the vertex of the parabola is the point with coordinates (-3,4); the zeros of the function are -1 and -5.”

Audio message 4 (15–20 seconds): “The image presents the graph of the quadratic function -x power 2 -6x-5; the vertex of the parabola is the point with coordinates (-3,4); the zeros of the function are -1 and -5; the function is decreasing from -3 to infinity and increasing from minus infinity to -3.”
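The four example messages above grow by appending one fact at a time. A sketch of how such tiered descriptions could be generated from a list of short facts, capping the count at the eight-concept threshold discussed above (the function and constant names are ours, not part of the described system):

```python
MAX_CONCEPTS = 8  # short-term memory threshold assumed from the cited literature

def tiered_descriptions(facts):
    """Build progressively longer audio messages from short facts,
    never including more than MAX_CONCEPTS facts in one message."""
    limit = min(len(facts), MAX_CONCEPTS)
    return [", ".join(facts[:i]) + "." for i in range(1, limit + 1)]
```

Each successive message repeats the previous facts and adds one more, so a user who needs less detail can stop after an earlier message.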

3.7.4 Analysis of the Context Perception of the Information Presented in the Picture.

Usability test of virtual buttons for providing additional description about the displayed image.

Appropriate audio descriptions of the presented picture are crucial for a blind person in the process of recognizing the elements placed in the graphics. Preliminary research carried out by the authors and consultations with blind people and their teachers led to the conclusion that, in the case of more complicated graphic pictures, a blind person may have difficulties in understanding the content and finding individual graphic primitives. Based on the information collected from students and teachers, the authors decided to place extra buttons (Figure 7) in the upper-right corner of the screen (audio-touch interface) for describing the overall features of the picture:

Circle sign: to provide extra information about the elements in the picture;

Plus sign: to provide additional information, such as the necessary mathematical formulas, etc.


Fig. 7. Usability test of virtual buttons for providing additional description of the displayed image; the blue frame indicates additional buttons in upper-right corner.

The proposed solution enables the user to become immediately familiar with the content of the picture through a general description of its elements. Six mathematics exercises were used for the research: three related to planimetry and three related to linear functions. The aim was to check the usability of the additional interface and to determine how frequently its elements were used.


4 RESULTS

Due to the extensiveness of the results presented below, Table 1 lists the section numbers of both the descriptions of the experiments carried out and the corresponding results.

Study topic | Section describing the experiment | Section describing the results
Tactile perception analysis | 3.7.1 | 4.1
Touch perception analysis | 3.7.2 | 4.2
The optimal width of elements displayed on the tablet screen | 3.7.2 | 4.2.1
The time intervals between taps on an image element | 3.7.2 | 4.2.2
Audio perception analysis | 3.7.3 | 4.3
Context perception analysis | 3.7.4 | 4.4

Table 1. Section Numbers of the Descriptions of the Experiments Carried Out and the Corresponding Results

4.1 Tactile Perception Analysis—Results

Tables 2 and 3 present the results of the research as the average effectiveness of recognizing the shapes of geometric figures depending on their size and shape, expressed on a normalized percentage scale. The first experiment concerned the effect of the lengths of the sides of figures (their proportions) on the effectiveness of shape recognition; the research focused mainly on equilateral and isosceles triangles, squares, and rectangles. It is observable that, in the case of a slight difference in the lengths of the sides (0.5 and 1 cm), correct recognition of the figure shape ranges from 59% to 87%. As mentioned in the Introduction, the tactile resolution of the fingers is relatively low, and therefore it is difficult to correctly recognize the shapes of figures whose side lengths differ only slightly.

Side length difference | Equilateral triangle | Isosceles triangle | Square | Rectangle
0.5 cm | 91% | 59% | 94% | 78%
1 cm | -- | 64% | -- | 87%
2 cm | -- | 86% | -- | 97%

Table 2. Influence of Side Length Differences on Figure Shape Recognition Correctness

Deviation from the right angle | Right triangle | Acute triangle | Obtuse triangle | Parallelogram | Isosceles trapezoid | Rectangle
10° | 93% | 14% | 16% | 27% | 34% | 95%
20° | -- | 74% | 64% | 54% | 68% | --
30° | -- | 94% | 89% | 88% | 93% | --

Table 3. Influence of Differences in Angle Measures on the Correctness of Figure Shape Recognition

The second experiment concerned the influence of differences in angle measures on proper figure shape recognition (results in Table 3). The research checked the effectiveness of figure recognition when the deviation from the right angle in the figure was 10°, 20°, or 30°. It is apparent that for a small deviation from the right angle (range 70°–110°), recognition is relatively low: for deviations of 10° and 20° it is, respectively, 14–34% and 54–74%. As in the first experiment, the lower detection is caused by poor perception of figure shapes for which the deviation from the right angle is relatively small.

The developed system also facilitates training perception while recognizing figure shapes by applying an audio description that supplies information about the figure's shape or its properties after a double tap on its contour. A blind person listening to a voice description of a figure's shape can pay more attention to the proportions of the sides or the angles between them and develop the ability to better recognize the properties of the figure, creating additional mental image patterns.

4.2 Touch Perception Analysis—Results

4.2.1 The Optimal Width of Elements Displayed on the Tablet Screen.

The results presented in Table 4 show that the maximum effectiveness in reading the audio description of a touched element in the tactile picture is achieved when the stroke-width attribute equals 30 pixels. The description was read on the first attempt with 96% effectiveness (average for the entire research group). The lowest efficiency was shown for a line thickness of 10 pixels, where the result was only 17%. As the width of the line increases, the interactive area of the figure increases, making it easier for the user to activate the reading of the audio description. A line that is too narrow in the picture on the tablet makes it difficult to precisely hit the element on the tablet screen (Figure 6(a)).

Line width | Number of read messages | Correct item was hit | No item was hit | Neighbor item was hit | Effectiveness
10 px | 101 | 17 | 60 | 24 | 17%
20 px | 257 | 110 | 89 | 58 | 43%
30 px | 577 | 554 | 13 | 12 | 96%
40 px | 526 | 463 | 22 | 41 | 88%
50 px | 486 | 394 | 31 | 61 | 81%
60 px | 434 | 312 | 30 | 92 | 72%

Table 4. Confusion Matrix and Average Effectiveness of Reading Audio Descriptions for Different Line Width of Graphic Elements on the Tablet Screen

We can observe that for values of 40, 50, and 60 pixels, the effectiveness of reading the proper audio descriptions decreases. This is caused by undesirable overlapping of objects (successive vectors from the SVG file displayed on the device screen), which results in an incorrect reading of the description (e.g., of a neighboring element); Figure 6(c) illustrates this situation. In addition, analysis of the detailed results of individual users showed a partial dependence on the motor stability of the user's hand and on finger size.
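The effectiveness figures in Table 4 are simply the share of reads in which the correct element was hit; recomputing them from the table's counts confirms the 30 px optimum (the variable and function names below are ours):

```python
# (number of read messages, correct hits) per stroke width in px, from Table 4
results = {10: (101, 17), 20: (257, 110), 30: (577, 554),
           40: (526, 463), 50: (486, 394), 60: (434, 312)}

def effectiveness(reads, correct):
    """Percentage of reads in which the correct element's description was read."""
    return round(100 * correct / reads)

# The stroke width with the highest share of correct reads
best_width = max(results, key=lambda w: effectiveness(*results[w]))
```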

4.2.2 The Time Intervals between Taps on an Image Element.

The participants took part in two experiments.

Experiment 1. Figure 8 shows, in the form of a histogram, the number of correctly recognized two-tap gestures in the research group with the assumed constant time interval of 300 ms (based on the literature review [40]). For half of the study participants, the effectiveness of recognizing a gesture as a double tap was 70%. Four participants achieved levels below 40%. The authors assessed this level as too low and conducted a second experiment in which the average time interval between taps in the research group was measured.


Fig. 8. Histogram of the number of correctly recognized two-tap gestures in the research group (with gradation of five), for a fixed interval between taps of 300 ms.

Experiment 2. In this experiment, users were asked to tap twice on the contours of the figures, and the designed system recorded the time intervals between taps. The mean time obtained was 332 ms, the median 339 ms, and the standard deviation 33 ms. Figure 9(a) shows the distribution of time intervals during the experiment. After determining the average time for the whole group, we repeated experiment 1. This time, 18 participants achieved double-tap recognition effectiveness above 70%, and the number of participants with a success rate of less than 40% decreased to two students (Figure 9(b)).
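The per-group threshold in experiment 2 was derived from descriptive statistics of the recorded intervals. A sketch using Python's statistics module (the sample values below are illustrative, not the study's raw data):

```python
import statistics

def interval_stats(intervals_ms):
    """Mean, median, and sample standard deviation of inter-tap
    intervals; the mean is then used as the per-group tap threshold."""
    return (round(statistics.mean(intervals_ms)),
            round(statistics.median(intervals_ms)),
            round(statistics.stdev(intervals_ms)))
```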


Fig. 9. (a) The distribution of time intervals between two taps during the experiment. (b) Histogram of the number of correctly recognized two-tap gestures in the research group (with gradation of five), for an interval between taps of 332 ms (time obtained during the experiment).

4.3 Audio Perception Analysis—Results

Each of the 28 participants of the research group, after listening to consecutive groups of messages differentiated in terms of length, commented on their intelligibility: whether their length was appropriate and made it possible to remember the information provided (think-aloud protocol). Furthermore, the teacher verified the intelligibility of individual messages by asking the participants detailed questions. The test results are presented in Table 5.

Feelings and comments of the survey participant (based on think-aloud protocol) | Understanding level (assessed by the teacher) | Audio description length (at the default reading speed of the TextToSpeech synthesizer in the Android system)
Understandable, acceptable | 97% | <5 s
Understandable, appropriate length, amount of detail acceptable | 84% | 5–10 s
Seemingly too many concepts in the message; it was difficult to remember all the information | 36% | 10–15 s
Too many concepts in the message; hard to remember and understand: “I don't remember what was in the beginning, I got lost, too many details” | 17% | 15–20 s

Table 5. Assessment of Understanding and Perception of Audio Description with Different Duration of Audio Messages

For audio messages 15–20 seconds long, the number of concepts introduced was too large and unacceptable for most participants in the experiments. Only two survey participants assessed these messages positively, which was also confirmed by the teachers' assessments; the teachers described these students as gifted, with good auditory memory.

4.4 Context Perception Analysis—Results

The developed tool helped visualize and determine the statistics of screen taps performed by individual students while solving the exercises. Figure 10(a) and (b) show the characteristic map of taps on the screen for a selected user in a selected time interval.


Fig. 10. Usability test of virtual buttons for providing additional audio description of the displayed image example: (a) exercise 1 and (b) exercise 2. Red markers indicate the places in the tactile image touched by a blind student during the experiment.

The usability of the proposed buttons (additional interface functionality) was measured by the number of times they were used; the results are presented in Table 6. The table shows that among the taps on voiced elements, as many as 21.49% were taps on the proposed virtual buttons. The detailed results allow us to conclude that this ratio varies with the complexity of the picture and its content, from 9.89% to 30.05%, and increases with the complexity of the exercise.

Picture | Total taps on elements | Total taps on buttons 1–4 | Taps on button 1 | Taps on button 2 | Taps on button 3 | Taps on button 4 | Share of taps on buttons
1 | 386 | 116 | 34 | 34 | 32 | 16 | 30.05%
2 | 430 | 70 | 38 | 32 | -- | -- | 16.28%
3 | 486 | 98 | 44 | 54 | -- | -- | 20.16%
4 | 196 | 58 | 28 | 30 | -- | -- | 29.59%
5 | 70 | 16 | 10 | 6 | -- | -- | 22.86%
6 | 182 | 18 | 10 | 8 | -- | -- | 9.89%
Total | 1,750 | 376 | 164 | 164 | 32 | 16 | 21.49%

Table 6. Usability Test of Virtual Buttons for Providing Additional Audio Description of the Displayed Image


5 DISCUSSION

Our work has led to many practical conclusions about improving the effectiveness of tactile graphic perception by blind people, and it outlines guidelines for preparing tactile pictures supported by multimedia extensions (sound and audio description). It also explains what factors affect the effectiveness of their recognition.

Many factors influence the effectiveness of correctly recognizing the shapes of elements in a tactile picture: the elevation of raised lines and object surfaces, the thickness of lines (contours of objects), the size of tactile elements, and the distance between elements in the tactile image. The correct interpretation of the information presented in a complex tactile picture is effortful when numerous elements are placed on it. Therefore, it is necessary to add further context information about the semantics of the elements in the picture to support a blind person in interpreting the presented content.

The findings of this study on the tactile perception of geometric figure shapes indicate potential misinterpretations of figure shapes when tactile pictures are prepared incorrectly. We have noticed that when the proportions of the side lengths of a figure, or the measures of its angles, differ only slightly, most blind people fail to recognize its shape. Considerable progress in improving such perception can be achieved by increasing the differences in the lengths of the sides and in the measures of the angles between the sides of the figure. A further step is introducing an alternative audio description of the properties of the figure so as to improve the interpretability of the figure shape.

The line width should be taken into account not only in the tactile layer but also in the SVG layers of the picture structure, and, as the results show, it should be selected for a specific user. There is a tradeoff between the simplicity of interaction (tap detection) and the number of elements presented in the tactile picture. Large gaps between picture elements facilitate their detection but may prevent the presentation of complex content on the limited surface of a tactile picture. SVG technology permits arranging picture layers in a different order, which affects the order in which audio descriptions are read; elements for which audio descriptions are to be available should be in the upper layers of the drawing. For most users, the width of a virtual line in a digital drawing displayed on a tablet should be around 5 mm for a corresponding 1 mm tactile line. In this case, touching the screen with a finger next to the tactile line is interpreted as touching the wider virtual line, and the audio description is read for this event. However, on many occasions we noticed participants' hands shaking or fingers trembling while touching the tactile image; we assumed that these inconveniences could be the result of stress or the user's current condition. Therefore, it would be desirable to include the option of increasing or adjusting the line thickness for an individual user.

The configuration of the time intervals defining the user's gestures (two taps, three taps) depends mainly on the motor skills of the user's fingers and on how familiar the user is with touch screens. The research implies that this interval should be about 340 ms. Our empirically measured distribution of time intervals for gesture recognition in the examined group of users is practically the same as the results of other works [40]. Moreover, our tool enables individual selection of this value for people who may experience inconveniences resulting from hand shaking.

Applying audio descriptions to tactile picture elements improves the understanding and interpretation of the information presented. However, such a description must meet certain conditions to avoid excessive cognitive load while listening. It should consist of a few simple statements, including the picture details, though, as the research has shown, the number of these details must be limited to a few. These results are similar to the results of other studies on cognitive load [25, 48]. Most participants in the test group accepted voice prompts of 5–10 seconds; longer messages were incomprehensible, and their details were hard to remember. In the developed solution, the audio description can be divided into two or three different descriptions available to the user after tapping two or three times on an image element. This allows the user to decide on the amount of contextual information needed about the touched tactile picture element.

Depending on the complexity of the image, it is not possible to present all the detailed information. The research therefore confirmed the usefulness of the additional functionality enabling the presentation of extra information describing the context of the picture. This facilitates the interpretation of the picture and supports users in situations where they lack the knowledge necessary to solve the exercise. A blind person does not have to take his or her hand away from the picture and reach for other educational materials (mathematics tables, textbooks, internet educational materials) to receive tips and the information necessary to solve the exercise. The research confirmed that blind people very often resorted to this form of support and that it made it easier for them to solve the exercises.


6 CONCLUSIONS

Blind people need more supportive educational tools and materials engaging various senses, which requires the precise design and use of multimedia when presenting graphics. The aim of the authors was to correctly present visual information in tactile and audio form and to provide contextually selected information to reduce the existing cognitive barriers.

The results of this study suggest the need to tailor the image presentation parameters in the various perception channels to the user's needs. The research attempted to show typical values and ranges of various image presentation parameters and indicated effective methods of selecting them for a specific system user. This selection seems crucial from a practical point of view, as some participants revealed quick exhaustion, hand shaking, and low motivation and self-esteem resulting from previous negative experiences.

Due to the complexity of the issue and the number of potential factors affecting the results of our work, the authors did not analyze the impact of alternative methods of presenting colors in the pictures. Usually, this is done by using various types of textures (small tactile triangles, circles, etc., placed inside the figure) for a limited number of primary colors. The current study was also limited to primary school students, and its main focus was on recognizing simple shapes of geometric figures. Further work should extend the research conducted so far to older age groups and focus on recognizing the properties of mathematical functions based on their graphs, as well as images in the fields of physics, chemistry, biology, and geography. A separate issue, not addressed in this research, is the recognition of the shapes of 3D elements; in the literature one can find such research using elements printed on 3D printers, new technologies applying cameras, and recognition of three-dimensional images.

Our research used static tactile pictures printed with a braille printer and placed on the tablet screen. This is undoubtedly a significant limitation in providing graphic information in tactile form to the blind, because each exercise takes considerable time to prepare and print as a tactile picture. Moreover, the user must change the cards (sheets of paper) with pictures placed on the tablet. In the future, we plan to automate this process by introducing automatic recognition of a tactile picture using NFC tags placed on the paper sheets. Some authors [4, 52] focus on dynamic touch screens, enabling the presentation of images at low resolutions of 32 × 48 to 60 × 120 pins. Nevertheless, this solution has some drawbacks, namely, a very high price of about 40,000 euros, relatively low resolution, and a small area of the depicted tactile image. These devices also do not support direct detection of the touched image elements, which prevents providing additional contextual information about their semantics.

It is also recommended that the next stage of the research, at the substantive level, should investigate the influence of the parameters of the multimedia presentation of a picture on cognitive saturation, on understanding of the content, and on the ability to apply picture perception in other areas and everyday situations. The review of the market and literature serves as a continuous incentive for the authors to go deeper into the research and develop effective methods of presenting structured information for the blind, as these lead to increasing the ability of blind people to learn and acquire information on their own.

REFERENCES

[1] Nicholas A. Bradley and Mark D. Dunlop. 2005. An experimental investigation into wayfinding directions for visually impaired people. Pers. Ubiquitous Comput. 9 (2005), 395–403.
[2] Frances K. Aldrich and Linda Sheppard. 2001. Tactile graphics in school education: Perspectives from pupils. Br. J. Vis. Impair. 19, 2 (2001), 69–73.
[3] Catherine M. Baker, Lauren R. Milne, Jeffrey Scofield, Cynthia L. Bennett, and Richard E. Ladner. 2014. Tactile graphics with a voice: Using QR codes to access text in tactile graphics. In Proceedings of the 16th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS'14), 75–82.
[4] Yacine Bellik and Celine Clavel. 2017. Geometrical shapes rendering on a dot-matrix display. Lecture Notes in Computer Science 10688 (2017), 8–18.
[5] Rupert R. A. Bourne, Jaimie D. Steinmetz, and Mete Saylan. 2021. Causes of blindness and vision impairment in 2020 and trends over 30 years, and prevalence of avoidable blindness in relation to VISION 2020: The right to sight: An analysis for the global burden of disease study. Lancet Glob. Heal. 9, 2 (2021), e144–e160.
[6] Jocelyn L. Bowden and Penelope A. McNulty. 2013. Age-related changes in cutaneous sensation in the healthy human hand. Age (Omaha) 35, 4 (2013), 1077.
[7] Lisa Bowers and Ryan Hayle. 2020. Creative haptics: An evaluation of a haptic tool for non-sighted and visually impaired design students, studying at a distance. Br. J. Vis. Impair. 39, 3 (2020), 214–230.
[8] Nicholas A. Bradley and Mark D. Dunlop. 2002. Understanding contextual interactions to design navigational context-aware applications. Lecture Notes in Computer Science 2411 (2002), 349–353.
[9] Luca Brayda, Fabrizio Leo, Caterina Baccelliere, Elisabetta Ferrari, and Claudia Vigini. 2018. Updated tactile feedback with a pin array matrix helps blind people to reduce self-location errors. Micromachines 9, 7 (2018).
[10] Jolanta Brzostek-Pawłowska. 2019. Multimedia mathematical communication in a diverse group of students. J. Telecommun. Inf. Technol. 2 (2019), 92–103.
[11] Grigore C. Burdea. 2000. Haptics issues in virtual environments. In Proceedings of the International Conference on Computer Graphics (CGI 2000), 295–302.
[12] Jason K. Chow, Thomas J. Palmeri, and Isabel Gauthier. 2022. Haptic object recognition based on shape relates to visual object recognition ability. Psychol. Res. 86, 4 (2022), 1262–1273.
[13] Yaser Dhaher and Robert Clements. 2017. A virtual haptic platform to assist seeing impaired learning: Proof of concept abstract. J. Blind. Innov. Res. 7, 2 (2017).
[14] Aude Dufresne, Odile Martial, and Christophe Ramstein. 1995. Multimodal user interface system for blind and “visually occupied” users: Ergonomic evaluation of the haptic and auditive dimensions. In Human–Computer Interaction. Springer, 163–168.
[15] Quang Van Duong, Vinh Phu Nguyen, Anh Tuan Luu, and Seung Tae Choi. 2019. Audio-tactile skinny buttons for touch user interfaces. Sci. Rep. 9, 1 (2019), 13290.
[16] Mostafa Elgendy, Tibor Guzsvinecz, and Cecilia Sik-Lanyi. 2019. Identification of markers in challenging conditions for people with visual impairment using convolutional neural network. Appl. Sci. 9, 23 (2019), 5110.
[17] Grecia Garcia Garcia, Ronald R. Grau, Frances K. Aldrich, and Peter C.-H. Cheng. 2019. Multi-touch interaction data analysis system (MIDAS) for 2-D tactile display research. Behav. Res. Methods 52, 2 (2019), 813–837.
[18] Monica Gori, Giulia Cappagli, Gabriel Baud-Bovy, and Sara Finocchietti. 2017. Shape perception and navigation in blind adults. Front. Psychol. 8 (Jan. 2017), 10.
[19] Timo Götzelmann. 2018. Visually augmented audio-tactile graphics for visually impaired people. ACM Trans. Access. Comput. 11, 2 (2018), 1–31.
[20] Timo Götzelmann and Klaus Winkler. 2015. SmartTactMaps: A smartphone-based approach to support blind persons in exploring tactile maps. In Proceedings of the 8th ACM International Conference on Pervasive Technologies Related to Assistive Environments (PETRA'15).
[21] Leona Holloway, Kim Marriott, and Matthew Butler. 2018. Accessible maps for the blind: Comparing 3D printed models with tactile graphics. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems.
[22] Shekhar Jain, Deepak Begrajka, and Siddharth Pathak. 2014. HCI guidelines for designing website for blinds. Int. J. Comput. Appl. 103 (2014), 29–33.
[23] Sandra Jehoel, Don McCallum, Jonathan Rowell, and Simon Ungar. 2016. An empirical approach on the design of tactile maps and diagrams: The cognitive tactualization approach. Br. J. Vis. Impair. 24, 2 (2016), 67–75.
[24] Shah Khusro, Babar Shah, Inayat Khan, and Sumayya Rahman. 2022. Haptic feedback to assist blind people in indoor environment using vibration patterns. Sensors 22, 1 (2022).
[25] Paul A. Kirschner. 2002. Cognitive load theory: Implications of cognitive load theory on the design of learning. Learn. Instr. 12, 1 (2002), 1–10.
[26] Roberta L. Klatzky, James R. Marston, Nicholas A. Giudice, Reginald G. Golledge, and Jack M. Loomis. 2006. Cognitive load of navigating without vision when guided by virtual sound versus spatial language. J. Exp. Psychol. Appl. 12, 4 (2006), 223–232.
[27] Árni Kristjánsson, Alin Moldoveanu, Ómar I. Jóhannesson, Oana Balan, Simone Spagnol, Vigdís Vala Valgeirsdóttir, and Rúnar Unnthorsson. 2016. Designing sensory-substitution devices: Principles, pitfalls and potential. Restor. Neurol. Neurosci. 34, 5 (2016), 769–787.
[28] Bineeth Kuriakose, Raju Shrestha, and Frode Eika Sandnes. 2020. Multimodal navigation systems for users with visual impairments—A review and analysis. Multimodal Technol. Interact. 4, 4 (2020), 73.
[29] Jenna L. Gorlewicz, Jennifer L. Tennison, Hari P. Palani, and Nicholas A. Giudice. 2019. The Graphical Access Challenge for People with Visual Impairments: Positions and Pathways Forward. IntechOpen.
  30. [30] Lin Ming C., Baxter William, Foskey Mark, Otaduy Miguel A., and Scheib Vincent. 2002. Haptic interaction for creative processes with simulated media. In Proceedings of the IEEE International Conference on Robotics and Automation. Vol. 1, 598604. DOI:Google ScholarGoogle ScholarCross RefCross Ref
  31. [31] Lin Qing Wen, Hwang Sheue Ling, and Wang Jan Li. 2013. Establishing a cognitive map of public place for blind and visual impaired by using IVEO hands-on learning system. Lecture Notes in Computer Science 8006, Part 3 (2013), 193198. DOI:Google ScholarGoogle ScholarCross RefCross Ref
  32. [32] Baker Catherine M., Milne Lauren R., Drapeau Ryan, Scofield Jeffrey, Bennett Cynthia L., and Ladner Richard E.. 2016. Tactile graphics with a voice. ACM Trans. Access. Comput. 8, 1 (2016). DOI:Google ScholarGoogle ScholarDigital LibraryDigital Library
  33. [33] Maćkowski M., Brzoza P., Żabka M., and Spinczyk D.. 2018. Multimedia platform for mathematics’ interactive learning accessible to blind people. Multimed. Tools Appl. 77, 5 (2018). DOI:Google ScholarGoogle ScholarDigital LibraryDigital Library
  34. [34] Maćkowski Michał, Brzoza Piotr, Kawulok Mateusz, and Knura Tomasz. 2022. Mobile e-Learning platform for audio-tactile graphics presentation. In Computers Helping People with Special Needs, Springer International Publishing, Cham, 8291.Google ScholarGoogle ScholarDigital LibraryDigital Library
  35. [35] Maćkowski Michał, Brzoza Piotr, Rojewska Katarzyna, and Spinczyk Dominik. 2019. Assessing the influence of the teaching method on cognitive aspects in the process of mathematical education among blind people. Adv. Intell. Syst. Comput. 1033 (2019), 211220. DOI:Google ScholarGoogle ScholarCross RefCross Ref
  36. [36] Maćkowski Michał, Brzoza Piotr, Spinczyk Dominik, Meisel Rafał, and Bas Mateusz. 2020. Platform for math learning with audio-tactile graphics for visually impaired students. In Future Perspectives of AT, eAccessibility and eInclusion. 7582. Retrieved December 10, 2021 from https://www.icchp-aaate.org/sites/default/files/ED_1_Future_Perspectives.pdf.Google ScholarGoogle Scholar
  37. [37] Michał Sebastian Maćkowski , Piotr Franciszek Brzoza , and Dominik Roland Spinczyk . 2018. Tutoring math platform accessible for visually impaired people. Comput. Biol. Med. 95, (2018), 298306. DOI:Google ScholarGoogle ScholarDigital LibraryDigital Library
  38. [38] Maćkowski Michał, Żabka Marek, Kempa Wojciech, Rojewska Katarzyna, and Spinczyk Dominik. 2020. Computer aided math learning as a tool to assess and increase motivation in learning math by visually impaired students. Disabil. Rehabil. Assist. Technol. 17, 5 (2022), 559569. DOI:Google ScholarGoogle ScholarCross RefCross Ref
  39. [39] McLinden Mike. 2004. Haptic exploratory strategies and children who are blind and have additional disabilities. J. Vis. Impair. Blind. 98, 2 (2004), 99115.Google ScholarGoogle ScholarCross RefCross Ref
  40. [40] Melfi Giuseppe, Müller Karin, Schwarz Thorsten, Jaworek Gerhard, and Stiefelhagen Rainer. 2020. Understanding what you feel: A mobile audio-tactile system for graphics used at schools with students with visual impairment. In Proceedings of the Conference on Human Factors in Computing Systems (CHI’20). DOI:Google ScholarGoogle ScholarDigital LibraryDigital Library
  41. [41] Merabet Lotfi B. and Sánchez Jaime. 2016. Development of an audio-haptic virtual interface for navigation of large-scale environments for people who are blind. Lecture Notes in Computer Science 9739, 595606. DOI:Google ScholarGoogle ScholarCross RefCross Ref
  42. [42] Metatla Oussama, Oldfield Alison, Ahmed Taimur, Vafeas Antonis, and Miglani Sunny. 2019. Voice user interfaces in schools: Co-designing for inclusion with visually-impaired and sighted pupils. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI’19). ACM, New York, NY, 115. DOI:Google ScholarGoogle ScholarDigital LibraryDigital Library
  43. [43] Mikułowski Dariusz and Brzostek-Pawłowska Jolanta. 2020. Multi-sensual augmented reality in interactive accessible math tutoring system for flipped classroom. Intell. Tutoring Syst. 12149, (2020), 110. DOI:Google ScholarGoogle ScholarDigital LibraryDigital Library
  44. [44] Minhat Muzaireen, Abdullah Nasuha Lee, Idrus Rosnah, and Keikhosrokiani Pantea. 2017. TacTalk: Talking tactile map for the visually impaired. In Proceedings of the 8th International Conference on Information Technology (ICIT’17), 475481. DOI:Google ScholarGoogle ScholarCross RefCross Ref
  45. [45] Morrison Gary R. and Anglin Gary J.. 2005. Research on cognitive load theory: Application to e-learning. Educ. Technol. Res. Dev. 53, 3 (2005), 94104. Retrieved 20 March 2023 from http://www.jstor.org/stable/30220445.Google ScholarGoogle ScholarCross RefCross Ref
  46. [46] Nam Chang S., Li Yueqing, Yamaguchi Takehiko, and Smith-Jackson Tonya L.. 2012. Haptic user interfaces for the visually impaired: Implications for haptically enhanced science learning systems. Int. J. Hum.-Comput. Int. 28, 12 (2012), 784798. DOI:Google ScholarGoogle ScholarCross RefCross Ref
  47. [47] Oumard Christina, Kreimeier Julian, and Götzelmann Timo. 2022. Pardon? An overview of the current state and requirements of voice user interfaces for blind and visually impaired users. Computers Helping People with Special Needs. Springer International Publishing, Cham, 388398.Google ScholarGoogle ScholarDigital LibraryDigital Library
  48. [48] Paas Fred and Sweller John. 2014. Implications of cognitive load theory for multimedia learning. Cambridge Handbook for Multimedia Learning (2nd Ed.). Cambridge University Press. 2742. DOI:Google ScholarGoogle ScholarCross RefCross Ref
  49. [49] Pearl Cathy. 2016. Designing Voice user Interfaces: Principles of Conversational Experiences. O'Reilly Media, Inc.Google ScholarGoogle Scholar
  50. [50] Pigeon Caroline, Li Tong, Moreau Fabien, Pradel Gilbert, and Marin-Lamellet Claude. 2019. Cognitive load of walking in people who are blind: Subjective and objective measures for assessment. Gait Posture 67 (2019), 4349. DOI:Google ScholarGoogle ScholarCross RefCross Ref
  51. [51] Porcheron Martin, Fischer Joel E., Reeves Stuart, and Sharples Sarah. 2018. Voice interfaces in everyday life. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI’18). ACM, New York, NY, 112. DOI:Google ScholarGoogle ScholarDigital LibraryDigital Library
  52. [52] Prescher Denise, Bornschein Jens, Köhlmann Wiebke, and Weber Gerhard. 2018. Touching graphical applications: Bimanual tactile interaction on the hyperbraille pin-matrix display. Univers. Access Inf. Soc. 17, 2 (2018), 391409. DOI:Google ScholarGoogle ScholarCross RefCross Ref
  53. [53] Scheller Meike, Proulx Michael J., de Haan Michelle, Dahlmann-Noor Annegret, and Petrini Karin. 2021. Late- but not early-onset blindness impairs the development of audio-haptic multisensory integration. Dev. Sci. 24, 1 (2021), e13001. DOI:Google ScholarGoogle ScholarCross RefCross Ref
  54. [54] Senette Caterina, Buzzi Maria Claudia, Buzzi Marina, Leporini Barbara, and Martusciello Loredana. 2013. Enriching graphic maps to enable multimodal interaction by blind people. Lecture Notes in Computer Science 8009, Part 1, 576583. DOI:Google ScholarGoogle ScholarCross RefCross Ref
  55. [55] Senna Irene, Andres Elena, McKyton Ayelet, Ben-Zion Itay, Zohary Ehud, and Ernst Marc O.. 2021. Development of multisensory integration following prolonged early-onset visual deprivation. Curr. Biol. 31, 21 (2021), 48794885.e6. DOI:Google ScholarGoogle ScholarCross RefCross Ref
  56. [56] Shinoda Moeka, Koike Akihiro, Teraguchi Sayaka, and Teshima Yoshinori. 2022. Development of tabletop models of internal organs for anatomy learning of the visually impaired. Computers Helping People with Special Needs, Springer International Publishing, Cham, 261269.Google ScholarGoogle Scholar
  57. [57] Silva Chathurika S. and Wimalaratne Prasad. 2016. Sensor fusion for visually impaired navigation in constrained spaces. In Proceedings of the 2016 IEEE International Conference on Information and Automation for Sustainability (ICIAfS’16). DOI:Google ScholarGoogle ScholarCross RefCross Ref
  58. [58] Theurel Anne, Frileux Stéphanie, Hatwell Yvette, and Gentaz Edouard. 2012. The haptic recognition of geometrical shapes in congenitally blind and blindfolded adolescents: Is there a haptic prototype effect? PLoS One 7, 6 (2012). DOI:Google ScholarGoogle ScholarCross RefCross Ref
  59. [59] Yu Wai and Brewster Stephen. 2003. Evaluation of multimodal graphs for blind people. Univers. Access Inf. Soc. 2, 2 (2003), 105124.Google ScholarGoogle ScholarDigital LibraryDigital Library
  60. [60] Zeng Limin and Weber Gerhard. 2016. Exploration of location-aware you-are-here maps on a pin-matrix display. IEEE Trans. Human-Machine Syst. 46, 1 (2016), 88100. DOI:Google ScholarGoogle ScholarCross RefCross Ref
  61. [61] Blindness and Vision Impairment. Retrieved July 6, 2022 from https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-impairment.Google ScholarGoogle Scholar
  62. [62] Guidelines and Standards for Tactile Graphics. Retrieved July 6, 2022 from http://www.brailleauthority.org/tg/.Google ScholarGoogle Scholar


Published in
ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 19, Issue 5s (October 2023), 280 pages.
ISSN: 1551-6857; EISSN: 1551-6865. DOI: 10.1145/3599694.
Editor: Abdulmotaleb El Saddik.

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 7 June 2023
          • Online AM: 2 March 2023
          • Accepted: 26 February 2023
          • Revised: 13 February 2023
          • Received: 21 July 2022
Published in TOMM Volume 19, Issue 5s.


