Let It Snow: Designing Snowfall Experience in VR

We present Snow, a cross-modal interface that integrates cold and tactile stimuli in mid-air to create snowflakes and raindrops for VR experiences. Snow uses six Peltier packs and an ultrasound haptic display to create unique cold-tactile sensations, letting users experience catching snowflakes and feeling raindrops on their bare hands. Our approach builds on humans' ability to identify tactile and cold stimuli without masking each other when projected onto the same location on the skin, creating illusions of snowflakes and raindrops. We design the visual and haptic renderings to be tightly coupled, presenting snow melting and rain droplets for realistic visuo-tactile experiences. To render multiple snowflakes and raindrops, we propose an aggregated haptic scheme that simulates heavy snowfall and rainfall environments with many visual particles. The results show that the aggregated haptic rendering scheme delivers a more realistic experience than the other schemes. We also confirm that providing cold-tactile cues enhances the user experience in both scenes compared to the other modality conditions.


INTRODUCTION
With continually evolving AR/VR technologies, haptics has become integral to bridging humans and VR experiences as a new dimension of VR. Researchers have explored new technologies such as mid-air haptics, thermal displays, and smart materials beyond typical vibrotactile actuators, recreating more comfortable and meaningful physical experiences with richer haptic feedback. Users now demand even higher-quality touch experiences with multisensory cutaneous feedback, seeking more immersive and natural haptic experiences than ever.
Most of our daily life experiences are multisensory. We interact with the external world through multisensory feedback to learn and build practical knowledge and familiarity with interactions [56]. Even for touch interactions, we simultaneously perceive multisensory cutaneous feedback: a surface can feel rough but also dry, or wet and, at the same time, cold. We perceive multisensory cutaneous feedback of shapes, textures, stiffness, roughness, temperature, pressure, and humidity of surrounding objects with our hands and body to understand the world around us.
Multisensory cutaneous integration to deliver a rich and meaningful haptic experience is important but also challenging. Different cutaneous sensory modalities should interplay to create a coherent and unified touch perception. Such sensory integration requires understanding individual cutaneous sensory properties and the capabilities of corresponding actuators to create interactions with a convincing multisensory experience. As our brain combines information from different cutaneous sensory modalities, it is essential to consider different aspects of touch and other skin-related sensations.
We design a user interface that simulates snowfall and raindrop experiences in VR through multisensory cutaneous integration in mid-air. Just as in the physical world, we focus on creating natural touch experiences of catching snowflakes or feeling raindrops, evoking memorable moments from the past. The goal of this study is to enhance VR experiences through a unique cross-modal user interface that combines cold and tactile cutaneous feedback to replicate natural phenomena such as snowflakes and raindrops. By integrating cold sensations, pressure feedback, and haptic rendering, we seek to recreate everyday physical experiences so that users can not only see but also feel and interact with virtual environments in a way that closely resembles the physical world.
We present Snow, a non-contact haptic interface that blends cold and tactile stimuli as integrated cues to deliver snowfall and rainfall experiences using a cold chamber and an ultrasound mid-air haptic display. Our interface is designed around unified cold perception: two simultaneously presented stimuli, cold and tactile, are integrated and perceived as a unified signal without masking each other, as they are detected and processed by mechanoreceptors and thermoreceptors, activating the somatosensory cortex and insula cortex, respectively. Integrating these two cutaneous sensory modalities delivers unique multisensory experiences together with immersive VR scenes. For the visual and haptic rendering design, we consider the natural phenomena of snowfall and rainfall and further adopt experience-based user research to recreate those physical experiences. For multi-point rendering of heavy snowfall and rainfall, we measure human thresholds for perceiving multiple particles (i.e., snowflakes and raindrops) and propose an adaptive, aggregated approach to address the limitations in creating a large number of ultrasound haptic cues.
In this paper, we propose a proof-of-concept interface that provides mid-air cold-tactile feedback by integrating a chamber with six Peltier packs, which generates cold airflow, and an ultrasound haptic display for snowfall and rainfall VR experiences. We discuss how we design the interface and interaction techniques, show the feasibility of our approach, and report user experience results and several important findings. The main contributions of this paper are 1) the feasibility of integrating multisensory cutaneous displays that offer greater user experiences, 2) a unique setup and approach that utilizes unified thermal perception to provide cold-tactile sensations, 3) a user-experience-based design approach to creating individual and multi-point visuo-haptic renderings of snowflakes and raindrops, and 4) a set of guidelines based on user experience findings that can further contribute to recreating other physical experiences.

RELATED WORK

Multisensory Cutaneous Integration
Multisensory cutaneous integration refers to incorporating multiple sensory modalities, particularly tactile and touch-based inputs, to enhance user experience and interaction. This approach engages various touch senses simultaneously, providing a more immersive, intuitive, and effective interaction between users and digital devices or interfaces. Cutaneous afference involves the reception of multiple external stimuli, such as pressure, vibration, roughness, and temperature, which are processed by the human perceptual system. The corresponding receptors perceive specific stimuli and transmit signals to the brain for integration and processing, resulting in a sense of perception. For example, the warm and cold thermoreceptors detect temperature changes in the skin and convey them to the insula cortex, while the four types of mechanoreceptors, each sensitive to different frequencies, react to cutaneous touch and vibration and send signals to the somatosensory cortex [5,9]. Several researchers have explored this field. Singhal et al. [49,50] proposed a thermo-tactile display that enhances user experience with a high haptic pattern identification rate. Son et al. [52] presented thermal sensation at the location of a tactile cue by interplaying thermal and tactile sensations simultaneously. Park et al. [42] proposed a handheld device capable of providing vibrotactile and impact sensations simultaneously through the combination of a voice-coil vibration actuator and an impact actuator. Yem and Kajimoto [61] created a finger glove device employing a DC motor and electrodes to induce skin deformation and vibration. Benko et al. [3] proposed a controller that can render multiple-degree-of-freedom force feedback and shape feedback with a platform composed of three servo motors and gears. These works reveal the benefit of integrating multisensory cutaneous sensations.

Cold Displays
Displays that provide cold stimuli can be categorized into two types: contact-based and non-contact-based.
Contact-based cold displays. Rendering cold sensations is relatively challenging due to the natural direction of heat flow and the limited applicable mechanisms (only conduction and convection) [18,38,59]. Many contact-based cold displays use conduction-based Peltier devices attached directly, or through fabric, to human skin to provide thermal feedback. Manasrah et al. [37] proposed a Peltier array strapped on the forearm to provide a cold sensation. Massimiliano et al. [13] used an aluminum plate attached to a Peltier to generate an asymmetrical thermal sensation. A Peltier device can reach the target temperature quickly, while the cooling efficiency of the hot side determines the cold side's performance [10,44]. Several approaches [18,20] adopted water pipes on the forearm and finger to provide various temperatures; in these approaches, heat can circulate smoothly due to the high thermal capacity and mobility of liquid.
Non-contact-based cold displays. For convection-based non-contact cold displays, airflow control is the fundamental mechanism. Using multiple power fans [25], participants could feel an ambient airflow environment. Xu et al. [59] introduced an airflow propulsion system that moves compressed cold air through a vortex tube to provide cold sensations. Mohd Adili et al. [40] and Hokoyama et al. [23] combined airflow with fog, where the water droplet evaporation process enhanced the cooling effect. Nakajima et al. [39] and Kamigaki et al. [30] presented methods to provide cold feedback using acoustic streaming, where the airflow is driven by ultrasound waves. Although all these approaches showed great benefits and performance in providing cold sensations, they are limited in presenting localized cues in mid-air.

Mid-Air Haptics
Mid-air haptics provides tactile sensations without contact, offering greater flexibility in 3D interaction. Several techniques have been proposed for mid-air haptic display, including air [19,51,53], laser radiation [32], and ultrasound haptic displays using transducer arrays [6,27]. Compared with other techniques, ultrasound haptic displays are actively investigated in many research fields and applications [6,15,17,22,26,31,36,47,54,60,62] due to their better scalability and ability to control cues in 3D space [6,16,28,45]. Ultrasound mid-air haptics uses an algorithm that creates a focused pressure point in 3D space with multiple ultrasound transducers, placing the focal point at a fixed location through in-phase ultrasound waves with a specific modulation frequency [16,27]. Optimized algorithms have seen promising growth in recent years and can render multiple focal points in mid-air with a relatively high spatial resolution (1 cm) [6,24,36,41,58]. A recent study discovered that acoustic energy concentrated at the focal point can be converted into heat, offering thermal feedback, as noted in Wang et al. [57]. This finding broadens the scope of mid-air haptics applications.

INTERFACE DESIGN
We designed and implemented a mid-air tangible user interface that provides cold-tactile cues using a cold chamber and an ultrasound haptic display. Our design builds on unified cold perception, integrating the two modalities of tactile and cold sensing to deliver cold-tactile feedback of snowflakes and raindrops.

Principle
The basic mechanism in our approach is to leverage humans' ability to identify both tactile and cold stimuli without masking each other when they are projected onto the same location on the skin [49]. In thermal perception, the thermoreceptors in human skin detect temperature changes and carry that information to the insula cortex in the brain [5,9]. The mechanoreceptors, meanwhile, detect tactile sensations, which are processed in the somatosensory cortex, a different part of the brain. While it has not been clearly established how the simultaneous presentation of both stimuli is perceived as an individual signal [4,7,46], it has been shown that humans can perceive the two modalities of tactile and thermal cues in mid-air without masking each other [49]. With this principle of thermo-tactile presentation, we designed a user interface that generates cold airflow to create a chilly atmosphere in the interaction space and acoustic pressure points in the same space using an ultrasound haptic display. This unique integration can create a unified cold perception of snowflake and raindrop sensations.

Design and Implementation
Figure 2 shows our prototype interface. The main body is a hexagon-shaped chamber with 30 aluminum beams forming the frame of the structure and insulation foam (R-TECH insulation panel) for the surface cover. The chamber surface is further covered with aluminum foil to retain cold temperatures inside the chamber. The chamber has the shape of two hexagonal conical frusta (i.e., top and bottom), each with a height of 100 mm, connected by a shared hexagonal surface with an edge length of 240 mm. The bottom of the chamber is open, with an edge length of 145 mm, to allow retained cold airflow to travel down to the hand interaction space. The chamber's shape was chosen to meet specific criteria: it should accommodate multiple device installations on various surfaces, the outlet surface must align with the hand's area, and the airflow speed must remain low to prevent any interference with sensory sensations. The chamber is supported by four legs (AboveTEK TS-398W) attached symmetrically around the center of the chamber, each with three joints to adjust the height of the entire system. The volume of the chamber is 0.021 m³. We installed six Peltier packs (Walfront B07H6LK1W7) and axial fans (Noctua NF-A6x25) at the center of each hexagonal surface in the upper part of the chamber, facing inward. The Peltier cooling system was chosen as the primary cooling source due to its rapid response time and compact size, which offer advantages over traditional compressor-based air conditioners [29]. The number and placement of the Peltier packs were chosen based on computational fluid dynamics (CFD) simulation. The power for each Peltier pack comes from an adjustable voltage power supply (Xunba SD500-0-48) with a range of 0-12 V. Each pack includes four Peltier plates and two water tanks directly linked to the hot side of the Peltier plates using thermal paste, which facilitates efficient heat transfer. The maximum power consumption of each Peltier plate is 72 W.
An axial fan is placed at the back of each Peltier pack to provide airflow. The fan is controlled by PWM (Pulse Width Modulation) with a speed range from 0 to 3000 revolutions per minute (RPM), providing airflow up to 29.2 m³/h. The power consumption of the axial fan is 0.96 W. A Peltier plate transfers heat between its two sides by consuming electrical energy, producing one cold and one hot side. We designed the Peltier cooling system with a total of six water pumps (max flow rate: 300 L/h, Syscooling SC-300T) and six radiators (500 W heat removal capacity, Clyxgs B07PG98KD4), one water pump and one radiator per Peltier pack, as shown in Figure 2. The power consumption of each pump is 4 W. The radiator and pump were selected based on the Peltier heating effect to dissipate heat promptly. The heat from the hot side is transferred to the water tank, and the circulating cold water functions as a cold flux source to cool down the hot surface of the Peltier plate.
Inside the chamber, an ultrasound haptic display (STRATOS Explore, Ultraleap, UK) is placed at the top, facing downwards. The distance between the ultrasound display and the bottom of the chamber (open space) is 165 mm. The radiation force at a focal point 200 mm from the board is 1.38 mN [35]. A hand-tracking device (Stereo IR 170, Ultraleap, UK) is mounted at the end of the bottom structure, facing down, tracking the user's hand movement in the interaction space, where the user can interact with virtual snowflakes and raindrops through ultrasound cues and cold sensations. This interaction space is defined as a 3D space of 200 mm (length) × 200 mm (width) × 70 mm (height) extending from the bottom of the open space of the chamber to the top of the table on which the interface is placed. The total weight of the complete interface is 15.1 kg, and its total power consumption is 2600 W.
When all six Peltier devices are activated, each Peltier plate transfers heat between its hot and cold sides. The axial fan attached to each Peltier pack operates at 900 RPM, blowing airflow across the cold sides of the plates and generating cold airflow inside the chamber. We kept the fan speed at a constant 900 RPM after testing values ranging from 0 to 3000 RPM (see Section 3.3). The cooling system keeps transferring the heat generated at the hot side of the plates to the water tank, releasing it through the radiator to maintain good working conditions for the Peltier plates. After the cold air fills the entire chamber, the temperature of the interaction space decreases through thermal exchange, creating a cold ambient environment. Since the speed of the cold airflow is low due to the low fan speed, ultrasound haptic cues can be perceived without interference from the cold airflow. As the ultrasound haptic display provides localized tactile feedback by creating a focused pressure point with controlled modulation frequency, phase shift, and amplitude, the user can feel various haptic sensations in a cold environment, perceiving unified cold-tactile sensations like snowflakes.
We validated our design with a finite element simulation using ANSYS to simulate the cold Peltier source, airflow, and temperature distribution of the interface. The model contains the same structure, components, and dimensions as the proposed design (open-bottom chamber, six Peltier packs with fans). The thermal simulation was performed with ANSYS Fluent using boundary conditions based on the physical and thermal properties of the materials used in the prototype. The rated cooling power per Peltier pack was set to 280 W, and the axial fan speed was set to 900 RPM. Our simulation results show that the contact region temperature is lowered from 22 °C to 10 °C in 100 seconds, an efficient cooling process given the large chamber volume.

Interface Performance
Four tests were conducted to assess the performance of our Snow interface. We evaluated different fan speed levels to find the optimal fan speed, assessed the temperature controllability of the interface when switching from one temperature level to another, evaluated temperature stability to see how well the interface maintains stable temperatures over an extended period of time, and measured the temperature distribution in the interaction space.

Fan Speed. Figure 3 (a) shows the performance of the axial fan speed, ranging from 500 to 1300 RPM with an interval of 200 RPM. Fan speeds under 500 RPM were not selected because the Peltier pack began freezing, creating ice that blocked the airflow. Fan speeds above 1300 RPM could not reach the lower temperature levels, and speeds over 2000 RPM can interfere with ultrasound haptic cues. The test was done with a fixed supply voltage of 12 V, and the temperature sensor (DS18B20) was placed at the top center of the interaction space.

Temperature Stability. The temperature stability test was conducted to evaluate whether the interface can consistently maintain the temperature for an extended period of time. We activated the system from room temperature at a specific voltage level (i.e., 12 V, 8 V, 6 V, and 4 V) to maintain the temperature at 9.5, 11.5, 13.5, and 15.5 °C, respectively. We set the optimal fan speed at 900 RPM. The airflow rate in the interaction zone remained below 0.5 m/s, ensuring it did not disrupt the sensory experience. Figure 3 (b) shows the temperature changes at each temperature level over 600 seconds once thermal equilibrium is reached. The mean temperatures at each level were maintained at the dedicated temperature levels within acceptable boundaries.
Temperature Control. Figure 3 (c) shows the time required to cool from the temperature level for rain (15.5 °C) down to the temperature level for snow (9.5 °C), at an interval of 2 °C. The result shows that it takes 40 s, 72 s, and 122 s at each interval toward the snow temperature level. Conversely, Figure 3 (d) shows the time from the temperature level for snow (9.5 °C) up to the temperature level for rain (15.5 °C), at an interval of 2 °C (i.e., 9.5, 11.5, 13.5, and 15.5 °C). The result shows that it takes 55 s, 80 s, and 140 s at each interval toward the rain temperature level.
Temperature Distribution. We used a custom measurement setup to measure and examine the temperature distribution in the interaction space. The setup consists of three layers of temperature sensor arrays (i.e., top, middle, and bottom) and a supporting aluminum frame (a hollow cube-shaped structure of 200 × 200 × 200 mm in width, length, and height). The temperature sensor array in each layer is constructed with 3 × 3 temperature sensors spaced 70 mm apart. The three layers are installed at different heights of the support frame at 70 mm intervals. We placed the measurement setup in the interaction space and collected the temperature data. Figure 4 shows the interpolated temperature distribution at 100 s, 200 s, and 300 s after the interface reaches thermal equilibrium. The result shows that the temperature is symmetrically distributed on each measurement plane at each time stamp.
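To illustrate how such a sparse sensor grid can be interpolated for visualization (as in Figure 4), the following Python sketch uses SciPy's griddata; the sensor coordinates and temperature values here are illustrative placeholders, not our measured data.

```python
import numpy as np
from scipy.interpolate import griddata

# 3 x 3 x 3 sensor positions, 70 mm apart (mm)
axis = np.array([0.0, 70.0, 140.0])
xs, ys, zs = np.meshgrid(axis, axis, axis, indexing="ij")
points = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])
readings = np.random.uniform(9.0, 12.0, len(points))  # placeholder temperatures (degC)

# Interpolate onto a 1 mm grid for one horizontal plane (z = 70 mm)
gx, gy = np.mgrid[0:140:141j, 0:140:141j]
plane = griddata(points, readings, (gx, gy, np.full_like(gx, 70.0)), method="linear")
```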

INTERACTION DESIGN
Our interaction design approach is based on multisensory physical experiences, drawing on the expectations derived from previous interactions with the world [56]. We conducted user interviews on previous physical experiences with snowfall and rainfall to derive essential parameters for the visual and haptic rendering designs. We further considered the natural phenomena to capture the key properties to be reflected in our design. We also balanced technical limitations (e.g., limits in generating multiple focal points in the mid-air haptic display) against human perception (e.g., perceptual thresholds for a large number of haptic cues), focusing on how to deliver an immersive and memorable VR user experience.

Snow
Snowfall is a naturally occurring weather phenomenon in which tiny ice crystals in clouds stick together and become heavy enough to fall to the ground. In a typical snowfall, the snow particles are around 1 mm in diameter with a median radius of 0.65 mm [14]. They fall at speeds between 1 and 2 m/s with an average variation of up to 0.7 m/s [1]. Snowflakes are extremely light and deliver very little force when they collide with the hand. A preliminary study was conducted with these parameters to obtain initial results. Due to the limitation in creating a 1 mm focal point with our ultrasound haptic display, we used the smallest possible cue size of 8.5 mm. The results showed that the participants were not satisfied with the experience provided. Some participants mentioned that the visual particle size was too small to focus on or that the haptic cue was too weak to perceive.
We then asked participants to describe the feeling of snowfall based on their experiences, mainly in terms of what they remember seeing, hearing, and feeling. Most said that they remember small particles falling slowly, providing soft feedback when they landed on the palm. Those particles then stay on the palm for a brief duration and melt, creating a cold feeling at that location. Based on these findings, we designed the visual and haptic renderings of snowfall.
Visual Rendering. The snow particles are generated using an interactive particle system. This particle system generates snowflakes that fall into users' interaction space so that users can extend their hands and interact with the particles. Six particle emitters are designed to generate snowflakes, five targeting the individual fingers and one the palm. Each emitter controls its emission rate, i.e., how many snowflakes to generate per second. By default, we set the particle emitters to place 70% of the snowflakes on the palm and 30% on the fingers. The particle diameter is randomly varied from 1.8 to 2.2 mm to enhance the visual experience. We decided to simulate a slow snowfall with a fall speed of 0.24 m/s so that users could catch the falling particles and perceive the impact of individual flakes. The snowflakes are also given a small velocity chosen randomly between -1 m/s and 1 m/s along the X and Z axes to induce a natural falling motion. Due to temperature differences, wet snow particles melt shortly after falling onto our hands. We render this phenomenon visually by gradually decreasing the size of the particles until they vanish over 1.5 seconds.
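For concreteness, the sketch below restates the emitter parameters above as standalone Python; the data structure and function names are ours for illustration, while the actual system implements this inside its VR particle system.

```python
import random

FALL_SPEED = 0.24   # m/s, slow snowfall
MELT_TIME = 1.5     # s, shrink-to-vanish duration after landing

def spawn_snowflake(on_palm: bool) -> dict:
    """Spawn one flake; 70% target the palm, 30% the fingers."""
    return {
        "diameter_mm": random.uniform(1.8, 2.2),     # randomized size
        "velocity": (random.uniform(-1.0, 1.0),      # lateral drift, X
                     -FALL_SPEED,                    # downward, Y
                     random.uniform(-1.0, 1.0)),     # lateral drift, Z
        "target": "palm" if on_palm else "finger",
    }

def spawn_batch(rate_per_s: int) -> list:
    """One second's worth of flakes at the given emission rate."""
    return [spawn_snowflake(random.random() < 0.7) for _ in range(rate_per_s)]
```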
Haptic Rendering. Based on the results from our preliminary study, we designed two types of haptic rendering schemes: snowflake melting sensation and multiple snowflakes sensation.
Snowflake Melting Sensation. The snowflake melting sensation is designed to simulate an individual snowflake melting on the palm. This haptic sensation is rendered as a circular cue of 20 mm in diameter at a frequency of 120 Hz, and its intensity decays over time to resemble the melting sensation. The sensation starts at the ultrasound haptic display's maximum intensity of 100% (3.94 mN) and decays non-linearly to 0 over 1.5 seconds. We compared this melting sensation scheme with two static rendering schemes where the intensity was kept constant at 50% and 100%, respectively. The results from the preliminary study confirmed that people preferred the melting sensation over both static schemes. One participant mentioned, "This condition was more realistic because the snow melting actually matched the feeling of touch I got."

Multiple Snowflakes Sensation. The ultrasound haptic display supports multiple simultaneous haptic sensations through the use of multiple control groups; however, the performance of the haptic display may degrade if more than two control groups are used [55]. We proposed two rendering schemes to address this limitation: i) Point-based and ii) Aggregation (see Figure 5). Both rendering schemes used the snowflake melting sensation with a circular cue of 20 mm in diameter at a frequency of 120 Hz, with intensity decaying over 1.5 seconds.
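The melting cue can be summarized as a time-varying intensity envelope. The Python sketch below assumes a quadratic fall-off, since the text specifies only that the decay is non-linear; set_focal_point is a hypothetical wrapper around the ultrasound display's API, not part of any specific SDK.

```python
def melt_intensity(t: float, duration: float = 1.5) -> float:
    """Intensity in [0, 1] at t seconds after the flake lands."""
    if t >= duration:
        return 0.0
    return (1.0 - t / duration) ** 2  # assumed non-linear (quadratic) decay

# Re-drawn each frame at the 120 Hz update rate, e.g.:
# for frame in range(int(1.5 * 120)):
#     set_focal_point(center, diameter_mm=20, intensity=melt_intensity(frame / 120))
```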
In the point-based scheme, we provided a haptic sensation at the exact point where a visual particle lands on the palm. We used a first-in-first-out approach: if a particle lands on the user's hand while the maximum number of sensations is already being generated, we stop rendering the oldest sensation and replace it with the new one (even before its 1.5-second duration has elapsed). We found that presenting more than three simultaneous focal points may decrease the intensity of individual points to a level where they are no longer easily distinguished. Based on these initial findings, we set the limit of concurrent haptic points to three (i.e., 1 point, 2 points, and 3 points).
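A minimal sketch of this first-in-first-out replacement, assuming a simple dictionary per active cue:

```python
from collections import deque

MAX_POINTS = 3

active_points = deque(maxlen=MAX_POINTS)  # oldest cue is dropped automatically

def on_particle_landed(position, now: float):
    # If the deque is full, appending evicts the oldest sensation,
    # even before its 1.5 s melting duration has elapsed.
    active_points.append({"pos": position, "start": now})

def expire_points(now: float, duration: float = 1.5):
    # Remove cues whose melting duration has finished.
    while active_points and now - active_points[0]["start"] >= duration:
        active_points.popleft()
```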
The aggregated rendering scheme is designed to address the limitations of the point-based scheme. In this scheme, we create larger haptic cues for multiple points landing within a threshold distance. For example, when the first snowflake lands on the palm, it is rendered as in the point-based scheme. When a second one lands within the threshold distance (20 mm) of the first flake, the centroid is updated to the point between the two flakes, and the original sensation is replaced with a larger sensation (diameter = 40 mm) centered at the updated centroid to represent both snowflakes. The cue updates whenever a snowflake finishes melting (after 1.5 seconds) or a new snowflake lands within the threshold distance. This scheme is helpful for heavy, fast snowfall, where multiple flakes can land in the limited space of the palm.
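The merging logic can be sketched as follows; here the merged cue is centered on the centroid of all flakes it represents, a straightforward generalization of the two-flake case described above.

```python
import math

THRESHOLD_MM = 20.0              # merge distance for snow (30 mm for rain)
SMALL_MM, LARGE_MM = 20.0, 40.0  # cue diameters (40/60 mm for rain)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def on_flake_landed(pos, cues):
    """cues: list of dicts {center, diameter, members} for active sensations."""
    for cue in cues:
        if dist(pos, cue["center"]) <= THRESHOLD_MM:
            cue["members"].append(pos)
            n = len(cue["members"])
            # centroid of all merged flakes
            cue["center"] = (sum(p[0] for p in cue["members"]) / n,
                             sum(p[1] for p in cue["members"]) / n)
            cue["diameter"] = LARGE_MM   # replace with the larger sensation
            return
    cues.append({"center": pos, "diameter": SMALL_MM, "members": [pos]})
```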

Rain
Rainfall occurs when clouds become saturated with tiny water droplets. These droplets combine to form heavier drops that fall to the Earth as rain. Raindrops can reach sizes of 5-6 mm and fall at speeds of 0.7-2 m/s in light rainfall, 2-9 m/s in medium rainfall, and 9-13 m/s in heavy rainfall [2,8]. Raindrops are heavier than snowflakes and deliver comparatively strong force when they collide with the palm.
Our preliminary experiment with those parameters showed that participants could not perceive the rainfall effect because the fall speed was extremely high and the cue size too small. From the user interview results, we found that when people think of rainfall, they think of fast-falling water droplets that deliver a strong impact when they hit the hand. They expected the raindrops to splash and expand after hitting the palm and to stay on the palm for a longer duration, providing a feeling of slight coldness and wetness.
Visual Rendering. Similar to the visual rendering of snowfall, an interactive particle system is designed with raindrop emitters for each finger and the palm. We initially set the ratio to 70% for the palm and 30% for the fingers. The raindrop fall speed was set from 0.2 to 0.7 m/s, allowing users more time to visually track the raindrops landing on their palms. When each raindrop hits the palm, a short splash animation is created at the contact point to clearly show where the raindrop landed.
Haptic Rendering. Based on the preliminary study results for raindrops, we designed two haptic rendering schemes: raindrop splash sensation and multiple raindrops sensation.
Raindrop Splash Sensation. The rendering scheme for rain consists of a circular sensation rendered 120 times per second (120 Hz), centered at the point where the raindrop falls, lasting 3 seconds. The diameter of this sensation increases over time from 0 to 40 mm to simulate the effect of a raindrop hitting the palm and expanding, as described above. The intensity of this sensation is kept constant at 100% throughout. This splash sensation was compared with two static rendering schemes where the diameter was fixed at 20 mm and 40 mm, respectively. Participants preferred the splash sensation over the others, finding it more realistic and clear.
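As a sketch, the splash can be expressed as a diameter envelope; linear growth is assumed here, since the text above specifies only the endpoints (0 to 40 mm over 3 seconds).

```python
def splash_diameter(t: float, duration: float = 3.0, max_mm: float = 40.0) -> float:
    """Cue diameter (mm) t seconds after the drop lands; intensity stays at 100%."""
    return max_mm * min(t / duration, 1.0)
```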
Multiple Raindrops Rendering. Rendering multiple simultaneous raindrops follows the same two rendering schemes: i) Point-based and ii) Aggregation. For the point-based rendering scheme, we support the same three variations of 1, 2, and 3 points as for snowflakes. In the aggregated rendering scheme for multiple raindrops, we use larger haptic cues (diameter = 60 mm) with a threshold distance of 30 mm. The rest of the mechanism is the same as the aggregated rendering scheme for snowflakes.

Virtual Environment
The virtual environment for both scenes is carefully designed for immersive VR experiences. Figure 6 (a) shows a snow scene in which the user is surrounded by snowfall. The user can see their virtual hands in the scene when they extend their real hands forward (see Figure 6 (b) and (c)). Two hand models were provided to represent male and female participants' hands. Background snowflakes were generated using a particle system that creates snowflake particles outside the user's reach. These particles are identical to the particles falling near the user's hand, and the amount of background snowfall changes dynamically based on the emission rate for the user's hand. Audio cues of blowing wind are added to enhance the user experience. Unlike the snowfall scene, the rain scene requires more dynamic weather changes. A light background rain is employed when the emission rate of interacting particles is less than nine drops per second; in addition to a proportional number of raindrops, it supports minimal background effects such as occasional lightning and thunder. A medium rainfall background is displayed when the emission rate is less than 17 drops per second, with more thunderclouds, heavier lightning, and a little accumulated groundwater visible with splashes and ripples. Finally, if the emission rate goes above 17 drops per second, a heavy rainfall background is displayed, rendering fast-falling, longer raindrops, thunderclouds, and a substantial amount of accumulated groundwater with more frequent splashes and ripples. These parameters were selected based on a preliminary study. The sound of raindrops is provided to enhance the realism of the multisensory experience.
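The background selection reduces to thresholding the emission rate; a minimal sketch of the tiers described above (the behavior exactly at 17 drops per second follows the "less than 17 is medium" rule):

```python
def rain_background(drops_per_s: int) -> str:
    """Pick the rain-scene background tier from the emission rate."""
    if drops_per_s < 9:
        return "light"    # occasional lightning and thunder
    elif drops_per_s < 17:
        return "medium"   # more thunderclouds, some accumulated groundwater
    else:
        return "heavy"    # fast, long drops, frequent splashes and ripples
```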

EVALUATION 1: HAPTIC IDENTIFICATION IN COLD FEEDBACK
This experiment investigates the effect of cold temperature on humans' ability to identify vibrotactile patterns in mid-air. We measured the identification rate to evaluate the impact of cold stimuli on perceiving the intensity, size, and location of ultrasound haptic cues on the palm. We hypothesize that the introduction of cold feedback will not hinder users' capacity to discern tactile intensity, size, and location on their palms.

Participants
Twelve participants (1 female; mean age = 25.25 years old; SD = 3.13) were recruited for this experiment. All participants were right-handed, and none reported any disorder affecting the sensations in their hands. Five reported being familiar with VR, and none reported previous experience with haptics. Each participant was compensated with a $10 gift card for their participation. All participants gave written informed consent, and all experiments were approved by the Institutional Review Board (IRB) of the authors' institution.

Apparatus
We used our prototype interface described in Section 3. The ultrasound array was connected to the main computer system (Lenovo Legion 5 with GeForce RTX 2060, Intel i7 1070H, and 16 GB RAM) to operate the experiment. This experiment did not require a VR headset, as the study focused only on identifying haptic patterns at different temperature levels; visual feedback was not provided. Temperature conditions were controlled by regulating the input voltage of each Peltier, and hand tracking was used so that experimenters could target specific areas of the hand for TaskLocation.

Experimental Design
We designed a within-subject study consisting of three tasks, TaskIntensity, TaskSize, and TaskLocation, each targeting a different aspect of vibrotactile cues across three temperature conditions: i) room temperature at 22 °C, ii) cool temperature at 15.5 °C, and iii) cold temperature at 9.5 °C. Additionally, we varied the duration of the stimuli between two options: i) 1.5 seconds and ii) 3 seconds. In each trial, participants were tasked with identifying the presented stimuli, and their accuracy was assessed. Thus, each task involves three independent variables, the task parameter (intensity, size, or location), duration, and temperature level, with identification accuracy serving as the dependent variable.
TaskIntensity. This task was designed to verify whether cold stimuli interfered with users' ability to perceive a change in the intensity of mid-air haptic sensations over time. It included three haptic patterns provided at the center of the palm: i) Increasing (an increase in cue intensity from 0% to 100%), ii) Decreasing (a decrease in intensity from 100% to 0%), and iii) Constant (a constant maximum intensity of 100%), shown in Figure 7 (a). Each of these patterns was rendered as a circle with a diameter of 40 mm at a frequency of 120 Hz. The patterns were played for two duration conditions of 1.5 or 3 seconds. These parameters were selected based on the snow and rain haptic renderings described in Section 4. Participants were only asked to state whether the intensity of the cue increased, decreased, or remained constant over the duration. This was repeated 5 times, yielding a total of 90 trials (3 intensity scenarios × 2 durations × 3 temperature levels × 5 repetitions). The order of trials within the task was balanced using a Latin square.
TaskSize. Similar to TaskIntensity, this task presented three haptic patterns at the center of the palm: i) Increasing (an increase in cue diameter from 0 mm to 40 mm), ii) Decreasing (a decrease in diameter from 40 mm to 0 mm), and iii) Constant (a constant diameter of 40 mm), as shown in Figure 7 (b). These patterns were played over either 1.5 or 3 seconds, and participants were only asked to state whether the size of the cue increased, decreased, or remained constant over the duration. The intensity and frequency were set to 100% and 120 Hz, respectively. There were a total of 90 trials (3 size scenarios × 2 durations × 3 temperature levels × 5 repetitions). The order of trials within the task was balanced using a Latin square.
TaskLocation. This task was designed to verify that the introduction of cold stimuli does not interfere with users' ability to distinguish the location on the hand where a mid-air haptic sensation is provided. In this task, a mid-air haptic stimulus was presented, and the participant was asked to identify where on the hand the stimulus was felt. There were nine stimulus locations (one on each of the five fingers and one on each of the four corners of the palm), as shown in Figure 7 (c). The targeted locations on the palm were: Upper Left, Upper Right, Lower Left, and Lower Right. The haptic cues were rendered as circles with a diameter of 40 mm, drawn at 120 Hz with 100% intensity. Participants were told ahead of time what the possible locations could be. At each location, a stimulus could be delivered for either 1.5 or 3 seconds. Each combination of location and duration was presented twice throughout the task. In total, there were 108 trials (9 location scenarios × 2 durations × 3 temperature levels × 2 repetitions). The order of trials within the task was randomized.

Procedure
Participants were briefed about the experiment and asked to read and sign a consent form. At the beginning of each task, they took part in a practice session to become familiar with the different mid-air haptic patterns presented in that task. During the practice and task sessions, participants placed their dominant hand under the apparatus in the interaction zone with the palm facing upward (towards the haptic display) and were instructed to keep their hand still. The prototype was arbitrarily set to one of the temperature conditions, and the three tasks were then presented in random order before changing the temperature condition. This was done to minimize the time spent changing the temperature between tasks.
For each trial, the experimenter asked the participant to identify the cue according to the task and recorded their response on a computer. Participants could request that a trial be repeated before answering. Participants were given a 2-minute break between temperature conditions. The entire experiment lasted one hour.

Results and Discussions
Figures 8, 9, and 10 show the normalized confusion matrices presenting the mean pattern identification accuracy across participants for all three tasks in all three temperature conditions. The first column of confusion matrices denotes the room temperature condition, the second the cool temperature condition, and the third the cold temperature condition. The mean identification accuracy for TaskIntensity was 91.3% (room), 89.0% (cool), and 91.3% (cold); for TaskSize, 91.0% (room), 93.0% (cool), and 92.6% (cold); and for TaskLocation, 94.5% (room), 94.3% (cool), and 94.3% (cold). We used a two-way repeated-measures ANOVA to evaluate the effect of temperature and duration on each task. The Kolmogorov-Smirnov (K-S) test and Levene's test were conducted to ensure the normality and homogeneity of the data. We confirmed that the effect of temperature condition was not significant across the three tasks: TaskIntensity (p = 0.4342), TaskSize (p = 0.6429), and TaskLocation (p = 0.5486).
This demonstrates that haptic patterns presented in mid-air are perceivable with a high accuracy rate irrespective of the temperature condition. It further supports our hypothesis and confirms that cold stimuli will not hinder the user's experience when we provide melting and splash haptic renderings for Snow and Rain, respectively, at those temperatures.
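For reference, the analysis above corresponds to the following Python sketch using statsmodels' repeated-measures ANOVA; the data file and column names are hypothetical.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# long-format data: one row per participant x temperature x duration cell
df = pd.read_csv("task_intensity_accuracy.csv")  # hypothetical file
res = AnovaRM(df, depvar="accuracy", subject="participant",
              within=["temperature", "duration"]).fit()
print(res)  # reports F and p per factor (e.g., p = 0.4342 for temperature)
```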

EVALUATION 2: MULTI-POINT THRESHOLD
While testing the design of our haptic rendering algorithms, we noticed that the relationship between the number of points visible on the user's virtual hand and the number of corresponding haptic points generated on the user's real hand does not always need to follow a 1:1 mapping. We observed that a greater number of visual points could be rendered using a smaller number of haptic points. This is possible because the two-point discrimination threshold is about 1 cm on the palm [55]; if multiple visual points land within this distance, one haptic cue might be enough to provide feedback corresponding to them all without breaking immersion. That is, a reduced number of haptic points can maintain congruence with the visuals across different particle emission rates. In this experiment, we measure the number of visual points (threshold) at which users detect a mismatch between haptics and visuals for the four rendering schemes (1P, 2P, 3P, and Aggregated) described in Section 4 for each scene. We hypothesize that a substantial quantity of visual points can be effectively represented by a reduced number of haptic points without compromising immersion, particularly when the visual points are closely clustered.

Participants
Twenty-two participants (5 females, 2 non-binary; mean age = 20.91 years old; SD = 2.83) were recruited for this experiment. All participants were right-handed, and none reported any disorder affecting the sensations in their hands. Nine reported being familiar with VR, mainly through playing VR games, and four had previous experience with haptics. Each participant was compensated with a $10 gift card for their participation. All participants gave written informed consent, and all experiments were approved by the Institutional Review Board (IRB) of the authors' institution.

Experimental Design
We used a psychophysical method, the two-down, one-up staircase, which adapts to the participant's threshold level and estimates the 70.7% point of the psychometric function [33,34,48]. For every participant, a threshold was estimated for each combination of scene (snow or rain) and haptic rendering scheme (1P, 2P, 3P, or Aggregated). In total, thresholds for eight conditions (2 scenes × 4 haptic rendering schemes) were estimated per participant. This was followed by a preference study, in which the rendering schemes were tested for user preference in two conditions per scene: i) Light Snow, ii) Heavy Snow, iii) Light Rain, and iv) Heavy Rain. The light snow condition rendered a total of 5 snowflakes per second on the user's palm, while heavy snow had an emission rate of 20 flakes per second. The emission rates for light and heavy rain were set to 8 and 25 drops per second, respectively. These numbers were chosen after internal testing. The trial conditions were conducted in random order, and the order of scene and haptic rendering scheme groups was balanced. This experiment has two independent variables, i) VR scene with corresponding temperature level and ii) haptic rendering scheme, and two dependent variables, i) rendering scheme threshold and ii) preference rating.

Procedure
The participant was given a brief description of the prototype and procedure before beginning the experiment. The participant was asked to wear the VR headset and place their dominant hand under the chamber in the interaction zone with their palm facing upward. The experimenter then conducted a calibration to synchronize the user's physical hand with its virtual counterpart. This calibration process utilized two stationary pivot points in both the physical and virtual environments, aligning the position and orientation of the virtual scene so that the virtual and physical pivot locations overlapped accurately. During the experiment, the participant was placed in a virtual scene and asked to interact with it by extending their hand. After exposing their palm for ten seconds, participants answered a two-alternative forced choice (2AFC) question with either "Yes" or "No," depending on whether they perceived a mismatch between what they saw in VR and what they felt on their hands. The experimenter then entered this response on a computer.
The starting precipitation rate was 1 particle per second for all conditions. We adjusted the precipitation rate based on the participant's response: the rate was decreased if they responded "Yes" (a mismatch was perceived) and increased if they responded "No". The initial step size was two particles per second until they changed their answer for the first time (i.e., the first reversal), after which the step size was reduced to one particle per second. The task continued until six reversals were reached, and the rates at the six reversals were averaged to estimate the threshold. After the task, participants filled out the part of the questionnaire pertaining to that task; for each task, they were asked to describe their experience in the scenario and what made them feel the mismatch. This task was repeated a total of eight times, once for each combination of scene (snow or rain) and haptic rendering scheme (1P, 2P, 3P, or Aggregated). Each task took approximately 6 minutes, including filling out the questionnaire. This was followed by a preference study in which participants tried all four rendering schemes with each visual condition, Light Rain, Heavy Rain, Light Snow, and Heavy Snow, as shown in Figure 6, and were asked to rank them from best-matched to least-matched scheme. The preference task took approximately 10 minutes, bringing the total duration of the experiment to 60 minutes.
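The staircase logic can be summarized in a short Python sketch; ask_mismatch stands in for the participant's 2AFC response and is hypothetical.

```python
def run_staircase(ask_mismatch) -> float:
    """ask_mismatch(rate) -> True if the participant reports a mismatch at
    the given precipitation rate (particles per second)."""
    rate, step = 1, 2                 # start at 1/s; initial step size 2
    last_answer, reversals = None, []
    while len(reversals) < 6:         # stop after six reversals
        answer = ask_mismatch(rate)
        if last_answer is not None and answer != last_answer:
            reversals.append(rate)    # record the rate at each reversal
            step = 1                  # finer steps after the first reversal
        # decrease on "Yes" (mismatch), increase on "No"
        rate = max(1, rate - step) if answer else rate + step
        last_answer = answer
    return sum(reversals) / len(reversals)  # estimated threshold
```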

Results and Discussion
The mean thresholds for each rendering scheme in both scenes are shown in Figure 11. The mean thresholds for the snow condition were 5.3, 9.1, 13.7, and 21.8 emissions per second for 1P, 2P, 3P, and Aggregated, respectively. The thresholds for the rain condition were 11.2, 15.4, 20.8, and 26.3 emissions per second for 1P, 2P, 3P, and Aggregated, respectively. Users were less likely to report a mismatch at higher emission rates when more simultaneous haptic points were presented. We used a two-way repeated-measures ANOVA to evaluate the effect of the haptic rendering scheme and visual scene on the perceived threshold, and both factors were found significant (p < 0.01).
Figure 12 (a) and (b) shows the preference ratings for each rendering scheme across the four visual conditions. Participants felt that the 1P condition matched well when the rate of particles falling on the hand was low. Participants said that the single haptic cue closely followed the incoming particles; they also mentioned that they could feel the haptic cue moving as each new particle hit their palm, providing reassurance on top of the visual cue. On the other hand, participants preferred the aggregated scheme for heavy precipitation scenarios. When the number of particles falling on the palm was larger, they felt that the point-based schemes could not do justice to the visuals, reporting a great deal of mismatch between the haptic feedback and the visuals presented to them. The aggregated scheme, in contrast, provided larger, more general haptic feedback covering bigger areas, and the mismatch was minimized.

EVALUATION 3: USER EXPERIENCES
This experiment explores the effect of cold temperature combined with mid-air haptic feedback in VR applications by comparing it with other modality conditions. We designed two VR scenes with visual and audio feedback for users to experience under four haptic feedback conditions. We hypothesize that providing multimodal feedback, a synthesis of auditory, visual, cold, and tactile cues, will lead to an elevated user experience.

Participants
Sixteen participants (7 females; mean age = 21.2 years old; SD = 4.7) were recruited for this experiment. One participant was left-handed and the rest were right-handed; none reported any disorder affecting the sensations in their hands. Eleven reported being familiar with VR, and only one was familiar with haptics. Each participant was compensated with a $10 gift card for their participation. All participants gave written informed consent, and all experiments were approved by the Institutional Review Board (IRB) of the authors' institution.

Experimental Design
A within-subject experiment was conducted with four feedback modality conditions: i) No Feedback, ii) Tactile Feedback, iii) Cold Feedback, and iv) Cold-Tactile Feedback. The audio and visual modalities were kept consistent across conditions. In No Feedback, neither mid-air haptic feedback nor cold feedback was presented. In Tactile Feedback, only ultrasound mid-air haptic feedback was presented, without activating the cold feedback. In Cold Feedback, only cold feedback was presented, without ultrasound mid-air haptic feedback. Finally, Cold-Tactile Feedback provided both ultrasound mid-air haptic feedback and cold feedback. We used both the rain and snow scenes for this experiment. For each condition, the amount of precipitation increased from 1 to 30 particles per second over 30 seconds. We designed an adaptive haptic rendering algorithm that automatically changes the rendering scheme based on the thresholds calculated in Experiment 2. Trial orders were randomized, and the order of the four modality condition groups was balanced among participants. Questionnaire sheets were prepared to measure participants' responses to their interaction experiences in three areas: Immersion ("The feedback condition was immersive, and it kept me engaged while interacting with the scene"), Enjoyment ("I enjoyed interacting with the scene in this feedback condition"), and Overall Satisfaction ("I felt that the feedback condition was satisfying"). Participants responded to each question by marking a check on a horizontal line (visual analog scale) labeled 'Strongly Disagree' and 'Strongly Agree' at its ends, for each feedback condition. Thus, this experiment has one independent variable, modality, and three dependent variables: immersion, enjoyment, and satisfaction ratings.
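A minimal sketch of this adaptive selection, using the mean thresholds from Evaluation 2 (Figure 11); the exact switching rule in our implementation may differ.

```python
SNOW_THRESHOLDS = [(5.3, "1P"), (9.1, "2P"), (13.7, "3P")]    # flakes/s
RAIN_THRESHOLDS = [(11.2, "1P"), (15.4, "2P"), (20.8, "3P")]  # drops/s

def pick_scheme(rate: float, scene: str) -> str:
    """Choose the rendering scheme from the current emission rate."""
    thresholds = SNOW_THRESHOLDS if scene == "snow" else RAIN_THRESHOLDS
    for limit, scheme in thresholds:
        if rate <= limit:
            return scheme
    return "Aggregated"  # heavy precipitation beyond the 3P threshold
```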

Procedure
Instructions were given to each participant along with a short training session to become familiar with the device before the main experiment. The same calibration process as in the previous evaluation was conducted before the experiment. Participants were instructed to move their hands freely to interact with the virtual objects, which were only interactable within the physical interaction space. The main experiment consisted of two scenes, Snow and Rain. For each scene, four feedback conditions were presented: No Feedback, Tactile Feedback, Cold Feedback, and Cold-Tactile Feedback. After each scene, participants filled out the questionnaire. A 5-minute break was provided between scenes. The entire experiment took 45 minutes per participant, including the time to fill out the questionnaire.

Results and Discussion
The continuous-scale ratings of participants' immersion, enjoyment, and satisfaction were collected and linearly scaled from 0 to 1 (0: Strongly Disagree, 1: Strongly Agree). Figure 13 shows the mean scores and standard errors of participants' ratings. Participants clearly preferred the Cold-Tactile Feedback condition in all measures for both scenes. The Tactile Feedback and Cold Feedback conditions were less preferred than the Cold-Tactile Feedback condition, while No Feedback was the lowest-rated condition overall.
We conducted a two-way repeated-measures ANOVA and found a highly significant effect of feedback condition (p < 0.001) on all three measures. Post-hoc comparison using the Tukey test revealed that Cold-Tactile Feedback differed significantly from the other three conditions (p < 0.001) for all three measures. We found no significant difference between Cold Feedback and Tactile Feedback (p = 0.90 for Immersion, p = 0.90 for Enjoyment, and p = 0.72 for Satisfaction).
The results indicate that the quality of the virtual experience can be significantly enhanced by using cold and vibrotactile cues simultaneously (mean ratings: No Feedback = 0.44, Tactile Feedback = 0.65, Cold Feedback = 0.62, Cold-Tactile Feedback = 0.84). They also show that participants preferred the No Feedback condition the least.
Questionnaire responses showed that participants considered the Cold-Tactile Feedback experience more complete. They mentioned feeling "weird" or "incomplete" in the Tactile Feedback condition, as they expected to feel cold; adding cold feedback in the Cold-Tactile Feedback condition made them feel like they were touching snow and made them feel immersed. Another participant mentioned, "Having all the parts together made it the most realistic in comparison to only having the visuals!" Several participants mentioned perceiving a sensation of wetness, with one participant expressing, "The rain felt really accurate in what I was feeling to what I was seeing. I could feel that wetness where the raindrops hit my hand".

GENERAL DISCUSSION

Key Takeaways

This paper presents a mid-air haptic user interface to create snowfall and rainfall experiences in VR. The interface uses a cold chamber and an ultrasound haptic display to create cold-tactile sensations of snowflakes and raindrops on the user's palm. We show that our interface and visuo-haptic rendering techniques can create a compelling multisensory experience by adopting a user-centric design approach with integrated cold-tactile multisensory cutaneous sensations. Here, we report several key findings and insights that we discovered throughout this study.
Impact of Multisensory Experience. Our results indicate that the integrated cold-tactile cue enriched the user experience. We showed that cold and tactile feedback can complement each other to deliver the most preferred and enjoyable user experience. A number of participants stated that they felt slightly warmer sensations in the Tactile Feedback condition but more realistic snow-melting sensations in the Cold-Tactile condition.
By integrating multiple sensory channels, designers can offer a more intricate and nuanced experience that would be challenging to replicate using a single modality. This leads to heightened feelings of realism, as observed in P11's comment: "Snow and rain definitely felt different thanks to their difference in weight, temperature, and speed." This remark suggests that participants were able to differentiate between the sensations of snowfall and rainfall, indicating that the tactile experiences corresponding to each were accurately represented and distinct.
Participants also expressed a sense of deficiency or incompleteness during exposure to the Cold Feedback and Tactile Feedback conditions, and reported a more comprehensive snowfall and rainfall experience when multisensory Cold-Tactile feedback was presented. This further emphasizes that multisensory cutaneous feedback plays a crucial role in substantially improving the overall user experience.
Design Approach. We showed that a user-experience-based approach can create individual and multi-point visuo-haptic renderings that lead to a great user experience. We thoroughly studied the natural phenomena and designed the haptic rendering schemes based on users' past experiences. We further created light and heavy snowfall and rainfall experiences by considering the limitations of the haptic device and human capabilities for perceiving multiple particles. With the adaptive rendering design, users could feel aggregated haptic sensations in heavy particle conditions and individual haptic sensations in light particle conditions, leading to an enhanced multisensory VR experience.
The experimental setup utilized air as the medium for delivering both tactile and thermal sensations. The inclusion of ambient cold air enhanced the perceptual experience by presenting sensations volumetrically rather than in a two-dimensional manner. Temperature control was achieved through voltage adjustments while maintaining a gentle airflow to preserve the accuracy of tactile perception. Interestingly, the effect on sound pressure density was negligible across the tested temperature ranges due to air's attenuation properties, highlighting the effectiveness of air as a medium for non-contact multisensory feedback.
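The following sketch illustrates one simple form such voltage-based control could take: a slow proportional loop driving the Peltier packs toward a setpoint. The gain, voltage limit, and the read_chamber_temp/set_peltier_voltage driver calls are hypothetical placeholders, not our actual firmware.

```python
# A minimal sketch of proportional temperature control for the cold chamber.
# read_chamber_temp() and set_peltier_voltage() are hypothetical driver
# functions; the gain and voltage limit are assumed values.

TARGET_TEMP_C = 9.5  # snow-level setpoint reported in the paper
KP = 0.8             # proportional gain (assumed)
V_MAX = 12.0         # Peltier supply limit in volts (assumed)

def control_step(current_temp_c, target_c=TARGET_TEMP_C):
    """Compute the Peltier drive voltage for one control iteration."""
    error = current_temp_c - target_c        # positive when too warm
    return max(0.0, min(V_MAX, KP * error))  # clamp to a safe drive range

# Illustrative loop (hardware calls are placeholders):
#   while True:
#       set_peltier_voltage(control_step(read_chamber_temp()))
#       time.sleep(1.0)  # thermal response is on the order of seconds
```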
Unique Hardware Setup. Our hardware setup that integrates cold and tactile cues is based on unified thermal perception, and it can create enjoyable multisensory experiences without the awkwardness of combined cues. Users could clearly perceive the two stimuli as individual sensory signals that did not mask each other. We found that the perceived tactile patterns maintained a high identification rate under cold stimuli, and users clearly perceived the different haptic rendering schemes of light and heavy snowfall and rainfall.
The setup has a modular design that can be adjusted for different chamber shapes and numbers of cooling devices. The aluminum beam and 3D-printed connector structure can be migrated to different scenarios based on the interaction design, giving it potential for application to various body parts and a wide range of VR applications, spanning from entertainment to training simulations for hazardous environments.
Wetness Illusions. It is interesting that some participants reported feeling sensations of wetness on their hands, particularly during interactions with the rain scene. This observation is noteworthy, given that humans lack specific biological receptors for perceiving wetness. Current theories suggest that the perception of wetness arises from a learned amalgamation of sensory factors, primarily thermal and tactile sensations. In simpler terms, feeling wet is not solely triggered by direct skin contact with moisture but rather by a complex integration of various sensory inputs [11, 12].
Studies by Han et al. [21] and Peiris et al. [43] used combinations of contact-based cold and tactile feedback to create the illusion of wetness on the fingers and face, respectively. Our work further demonstrates the potential to evoke the perception of wetness through the integration of cold and tactile feedback in mid-air, aided by visual cues, as evidenced by our observations during the rain scene.

Design Guidelines
We propose a set of comprehensive design guidelines that encapsulate key insights from our research. These guidelines encompass aspects such as modality congruency, exaggeration for convincing experiences, and the importance of precise calibration and device placement.
Modality Congruency. Fewer haptic points can effectively render more visual points while maintaining modality congruency. This observation aligns with findings by Pittera et al. [45], allowing scenarios with a large number of visual points to be rendered using only a few haptic points. Given the limitations of ultrasound haptic displays in creating numerous simultaneous pressure points, an adaptive approach using an aggregated rendering scheme can enhance performance in demanding conditions.
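One way to read this guideline in code is to cap the focal-point budget and let the most salient (here, most recent) visual landings claim it. The budget of three points and the recency ordering are assumptions for illustration, not a measured device limit.

```python
# A minimal sketch of rendering many visual particles with few haptic
# points: only the most recent landings receive a focal point.

K_BUDGET = 3  # simultaneous focal points allowed (assumed)

def select_haptic_points(visual_hits):
    """visual_hits: (x, y, timestamp) for every particle visually landing
    on the hand this frame; returns at most K_BUDGET positions to render."""
    recent_first = sorted(visual_hits, key=lambda h: h[2], reverse=True)
    return [(x, y) for x, y, _ in recent_first[:K_BUDGET]]
```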
Exaggeration for Convincing Experiences. Exaggerating visual and haptic presentations contributes to a more convincing user experience. Initially, mimicking parameters from the physical phenomena for the haptic cues led to unsatisfactory user experiences. Incorporating slight exaggerations, such as enlarging cue size, intensifying sensations, and adding visual effects, proved crucial for enhancing user immersion.
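A minimal sketch of this guideline, assuming hypothetical baseline values and gains: start from physically plausible parameters, then scale them by factors tuned in user testing.

```python
# Exaggeration as tunable gains over physically derived cue parameters.
# All values here are illustrative assumptions.

PHYSICAL_RADIUS_M = 0.002  # plausible snowflake contact radius
PHYSICAL_INTENSITY = 0.4   # normalized pressure-cue intensity (assumed)

SIZE_GAIN = 2.5            # enlarge the cue beyond its physical size
INTENSITY_GAIN = 1.8       # intensify the pressure sensation

def exaggerated_cue():
    """Return the (radius, intensity) actually sent to the haptic display."""
    radius = PHYSICAL_RADIUS_M * SIZE_GAIN
    intensity = min(1.0, PHYSICAL_INTENSITY * INTENSITY_GAIN)
    return radius, intensity
```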
Precise Calibration and Placement. Accurate calibration and device placement are crucial for rendering precise and strong haptic cues. Proper hand tracking with precise calibration is essential for an immersive VR experience. Correct installation of the ultrasound haptic display, even when it is mounted upside down, is vital to avoid weak haptic sensations and to ensure a satisfactory user experience.
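As an illustration of why placement matters, the sketch below maps tracked hand coordinates into the display frame, including the axis flip needed when the display is mounted upside down. The flip axis and translation offset are placeholders to be determined during calibration.

```python
# A minimal sketch of tracker-to-display calibration; the flip axis and
# offset are placeholders, not measured values from our setup.

import numpy as np

# 180-degree flip about the x-axis for an upside-down mounted display;
# which axis to flip depends on the actual installation.
R_FLIP = np.array([[1.0, 0.0, 0.0],
                   [0.0, -1.0, 0.0],
                   [0.0, 0.0, -1.0]])

T_OFFSET = np.array([0.0, 0.0, 0.25])  # tracker-to-display offset in metres (assumed)

def tracker_to_display(p):
    """Map a 3D hand-tracker point into the haptic display's frame."""
    return R_FLIP @ np.asarray(p) + T_OFFSET
```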

Opportunities, Limitations, and Future Work
Our technology has the potential to generate a range of effects, from weather-related sensations such as mist or small debris impacts to other tactile experiences common in outdoor VR settings. It could also be applied to different body parts by adapting the sensation rendering scheme and modifying the chamber outlet surface. However, the current display is tailored to stimulate bare skin, and thus the palms and fingers are the best candidates.
We report several limitations of our interface implementation and plan to address them. The lowest temperature that our interface can reach in the interaction space is 9.5 °C. We believe this temperature level is a good achievement, but it is also a limitation of our interface. We plan to upgrade the Peltier packs to achieve a more substantial cooling effect and to optimize the chamber design for better cold-airflow fluid dynamics. We also plan to miniaturize the system while maintaining its performance by reducing the size of the chamber and the number of Peltier packs through computational fluid dynamics (CFD) thermal analysis and prototype testing.
We plan to enhance our system by incorporating heat sensations, thus expanding the range of sensory experiences. By integrating approaches from [49, 50], we can create a unique immersive environment that blends hot and cold feedback, opening up new possibilities for user interactions in more engaging scenarios. Combining these systems can also simplify temperature control within the chamber, ensuring a seamless experience for users. We further plan to add humidifying modules to simulate water-vapor environments. Several potential applications for simulating fluid-based materials are under development, including water faucets (implemented), shower heads, and waterfalls enabled by hardware upgrades.

CONCLUSION
We designed and implemented a mid-air haptic user interface that creates snowfall and rainfall experiences in VR. Our interface design was based on unified thermal perception to generate cold-tactile sensations. We adopted a user-centric approach to design the multisensory snowfall experience based on user feedback and natural phenomena. Our results showed that providing multisensory cues enhanced the user experience in both the snow and rain scenes.

Fig. 2. (a) Concept diagram of the prototype system, (b) front view of the interface, and (c) an illustration of interaction with the interface.

Figure 3(c) shows the elapsed time to cool from the temperature level for rain (15.5 °C) down to the temperature level for snow (9.5 °C), at intervals of 2 °C (i.e., 15.5, 13.5, 11.5, and 9.5 °C).

Fig. 4. Temperature distribution showing the interpolation of measurement sensor array data at three layers at (a) 100 seconds, (b) 200 seconds, and (c) 300 seconds.

Fig. 5. Haptic rendering schemes: (a) 1 Point, (b) 2 Points, (c) 3 Points, and (d) Aggregated. In the one-, two-, and three-point schemes, only the most recently landed particles (shown with a higher proportion of blue) are haptically rendered. In the Aggregated condition, a larger cue represents particles that land close together.

Fig. 6. Two visual scenes with hand interactions. (a) A winter scene with snow, (b) light snow, (c) heavy snow, (d) a jungle scene with rainfall, (e) light rain, and (f) heavy rain.

Figure 6(d) shows the rain scene. The user is placed on a balcony in a rainy jungle with trees. Three different rainfall settings are used for the background visuals depending on the emission rate of the interacting particles. Unlike the snowfall scene, the rain scene requires more dynamic weather changes. A light background rain is employed when the emission rate of interacting particles is less than nine drops per second; in addition to a proportional number of raindrops, it includes minimal background effects such as occasional lightning and thunder. A medium rainfall background is displayed when the emission rate is less than 17 drops per second; it supports more thunderclouds, heavier lightning, and a small amount of accumulated groundwater visible with splashes and ripples. Finally, if the emission rate exceeds 17 drops per second, a heavy rainfall background is displayed, rendering fast-falling, longer raindrops, thunderclouds, and a substantial amount of accumulated groundwater with more frequent splashes and ripples. These parameters were selected based on a preliminary study. The sound of raindrops is provided to enhance the realism of the multisensory experience.
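The threshold logic above can be summarized in a short sketch; the drop-per-second cut-offs come directly from the text, while the function name and scene labels are illustrative.

```python
# Background selection from the interacting-particle emission rate.
# The cut-offs (9 and 17 drops/s) follow the description above.

def rain_background(drops_per_second):
    """Choose the background rainfall setting for the rain scene."""
    if drops_per_second < 9:
        return "light"   # minimal effects, occasional lightning and thunder
    if drops_per_second < 17:
        return "medium"  # more thunderclouds, heavier lightning, groundwater
    return "heavy"       # fast, long raindrops; frequent splashes and ripples
```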

Fig. 7. (a) Haptic patterns for TaskIntensity, where the force of the haptic cue increases, decreases, or remains constant over time; (b) haptic patterns for TaskSize, where the diameter of the haptic cue increases, decreases, or remains constant over time; and (c) the nine areas on the hand where a haptic cue could be played for TaskLocation.

Fig. 11. Mean thresholds of the four rendering schemes for snow and rain.

Fig. 13. Mean scores and standard errors of all measures in the different feedback conditions: (a) snow scene and (b) rain scene. A '*' denotes high significance.